Previously, if you passed a -j option to a redo process in a redo or
make process hierarchy with MAKEFLAGS already set, it would ignore the
-j option and continue using the jobserver provided by the parent.
With this change, we instead initialize a new jobserver with the
desired number of tokens, which is what GNU make does in the same
situation. A typical use case for this is to force serialization of
build steps in a subtree (by using -j1). In make, this is often useful
for "fixing" makefiles that haven't been written correctly for parallel
builds. In redo, that happens much less often, but it's useful at
least in unit tests.

Passing -j1 is relatively harmless (the redo you are starting inherits
a token anyway, so it doesn't create any new tokens). Passing -j > 1
is more risky, because it creates new tokens, thus increasing the level
of parallelism in the system. Because this may not be what you wanted,
we print a warning when you pass -j > 1 to a sub-redo. GNU make gives
a similar warning in this situation.
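
The token accounting works roughly like this (a minimal Python sketch
of the GNU-make-style jobserver convention; the function name and
warning text are illustrative, not redo's actual code):

```python
import os
import sys

def setup_jobserver(tokens):
    """Create a fresh jobserver pipe with `tokens` total job slots.

    Each running job holds one implicit token, so a -jN pipe starts
    out containing N-1 explicit tokens (the GNU make convention).
    """
    if tokens > 1 and os.environ.get('MAKEFLAGS'):
        # Like GNU make, warn that this increases total parallelism
        # beyond whatever the parent jobserver intended.
        sys.stderr.write('redo: warning: -j%d forced in sub-redo.\n'
                         % tokens)
    r, w = os.pipe()
    os.write(w, b't' * (tokens - 1))
    return r, w
```

With -j1, no explicit tokens get written at all, so the subtree runs
serially on its single implicit token.
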
Now that the python scripts are all in a "redo" python module, we can
use the "new style" (ahem) package-relative imports. This appeases
pylint, plus avoids confusion in case more than one package has
similarly-named modules.
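
For illustration, here's the difference, using a throwaway package
built on the fly ('demo_pkg' and its modules are made up, not redo's
actual module names):

```python
import importlib
import os
import sys
import tempfile

# Build a tiny package on disk so we can exercise a relative import.
top = tempfile.mkdtemp()
pkg = os.path.join(top, 'demo_pkg')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'state.py'), 'w') as f:
    f.write("NAME = 'demo_pkg.state'\n")
with open(os.path.join(pkg, 'main.py'), 'w') as f:
    # The old style was just 'import state', which could pick up any
    # module named state on sys.path.  The explicit package-relative
    # form can only mean this package's own state module:
    f.write('from . import state\n')

sys.path.insert(0, top)
main = importlib.import_module('demo_pkg.main')
print(main.state.NAME)  # -> demo_pkg.state
```
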
Merge the two files into env, and make each command explicitly call the
function that sets it up in the way that's needed for that command.
This means we can finally just import all the modules at the top of
each file, without worrying about import order. Phew.

While we're here, remove the weird auto-appending-'all'-to-targets
feature in env.init(). Instead, do it explicitly, and only from redo and
redo-ifchange, only if is_toplevel and no other targets are given.
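
A sketch of the explicit version (stand-in names; the real redo code
is organized differently):

```python
class Env:
    """Stand-in for redo's environment state; just the one field we need."""
    def __init__(self, is_toplevel):
        self.is_toplevel = is_toplevel

def effective_targets(env, targets):
    # Only a top-level redo/redo-ifchange with no targets defaults to
    # 'all'; sub-invocations with an empty target list stay empty.
    if env.is_toplevel and not targets:
        return ['all']
    return list(targets)
```
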
It's time to start preparing for a version of redo that doesn't work
unless we build it first (because it will rely on C modules, and
eventually be rewritten in C altogether).
To get rolling, remove the old-style symlinks to the main programs, and
rename those programs from redo-*.py to redo/cmd_*.py. We'll also move
all the library modules into the redo/ dir, which is a more
python-style naming convention.

Previously, install.do generated wrappers for installing in /usr/bin,
which would extend sys.path and then import+run the right file.
This made "installed" redo work quite differently from running redo
inside its source tree. Instead, let's always generate the wrappers in
bin/, and not make anything executable except those wrappers.

Since we're generating wrappers anyway, let's actually auto-detect the
right version of python for the running system; distros can't seem to
agree on what to call their python2 binaries (sigh). We'll fill in the
right #! shebang lines. Since we're doing that, we can stop using
/usr/bin/env, which will a) make things slightly faster, and b) let us
use "python -S", which tells python not to load a bunch of extra crap
we're not using, thus improving startup times.
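
Something along these lines (the helper names and wrapper body are
hypothetical; the real install.do differs):

```python
import os
import shutil

def find_python():
    # Distros can't agree on the binary name, so probe some candidates.
    # (This candidate list is illustrative, not redo's actual list.)
    for name in ('python2.7', 'python2', 'python3', 'python'):
        path = shutil.which(name)
        if path:
            return path
    return None

def write_wrapper(path, python, cmd):
    # Emit a wrapper with a concrete #! line: no /usr/bin/env, and -S
    # so python skips loading the site module at startup.
    with open(path, 'w') as f:
        f.write('#!%s -S\n' % python)
        f.write('import sys, redo.cmd_%s\n' % cmd)
        f.write('sys.exit(redo.cmd_%s.main())\n' % cmd)
    os.chmod(path, 0o755)  # the wrapper is the only thing made executable
```
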
Annoyingly, we now have to build redo using minimal/do, then run the
tests using bin/redo. To make this less annoying, we add a toplevel
./do script that knows the right steps, and a Makefile (whee!) for
people who are used to typing 'make' and 'make test' and 'make clean'.
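
The Makefile is just a thin shim over ./do; a plausible sketch
(target names assumed, not copied from the real file):

```make
all:
	./do

test: all
	./do test

clean:
	./do clean

.PHONY: all test clean
```
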