In Python 3, `zip()` returns a lazy iterator that holds references to
`tparts` and `bparts`. Because those two lists are modified within the
loop, the iterator skips elements instead of advancing properly.
Fix this by explicitly obtaining a list.
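A minimal sketch of the failure mode and the fix (only the variable names come from the commit; the list contents are illustrative):

```python
tparts = ['a', 'b', 'c', 'd']
bparts = ['1', '2', '3', '4']

# Broken: zip() is lazy in Python 3, so popping from tparts mid-loop
# shifts the remaining elements and makes the iterator skip pairs.
seen_broken = []
for t, b in zip(tparts, bparts):
    seen_broken.append((t, b))
    tparts.pop(0)
print(seen_broken)  # [('a', '1'), ('c', '2')] -- half the pairs lost

# Fixed: snapshot the pairs into a real list before the loop mutates anything.
tparts = ['a', 'b', 'c', 'd']
seen_fixed = []
for t, b in list(zip(tparts, bparts)):
    seen_fixed.append((t, b))
    tparts.pop(0)
print(seen_fixed)  # all four pairs survive
```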
Python 3 no longer allows ordering comparisons between different types.
Therefore, explicitly convert f.checked_runid to a number that's always
smaller than `f.changed_runid`.
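In Python 3, an expression like `None < 1` raises TypeError instead of quietly comparing. A sketch of the workaround described above (the helper name and the -1 sentinel are illustrative, not redo's actual code):

```python
def needs_recheck(checked_runid, changed_runid):
    # "never checked" (None) used to sort below every real runid in
    # Python 2; in Python 3 we must say so explicitly with a numeric
    # sentinel that is smaller than any real runid.
    checked = checked_runid if checked_runid is not None else -1
    return checked < changed_runid

print(needs_recheck(None, 5))  # True: never checked, so re-check
print(needs_recheck(7, 5))     # False: checked after the last change
```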
Silently recover if REDO_CHEATFDS file descriptors are closed, because
they aren't completely essential and MAKEFLAGS-related warnings already
get printed if all file descriptors have been closed.
If the MAKEFLAGS --jobserver-auth file descriptors are closed, improve the error
message so that a) it's a normal error instead of an exception and b)
we link to documentation about why it happens. Also write some more
detailed documentation about what's going on here.
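A sketch of the kind of check involved (the parsing and fd-probing helpers are illustrative; GNU make really does advertise its jobserver as --jobserver-auth=R,W in MAKEFLAGS, with --jobserver-fds as the older spelling):

```python
import os
import re

def jobserver_fds(makeflags):
    """Extract GNU make's jobserver read/write fds from MAKEFLAGS,
    or return None if no jobserver is advertised."""
    m = re.search(r'--jobserver-(?:auth|fds)=(\d+),(\d+)', makeflags)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2))

def fd_is_open(fd):
    # Probe the fd without reading from it; a closed fd raises EBADF.
    try:
        os.fstat(fd)
        return True
    except OSError:
        return False
```

Checking fd_is_open() up front is what lets us print a normal, documented error message instead of blowing up with a traceback deep inside the jobserver code.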
WSL (Windows Subsystem for Linux) provides a Linux-kernel-compatible ABI
for userspace processes, but the current version doesn't implement
fcntl() locks at all; it just always returns success. See
https://github.com/Microsoft/WSL/issues/1927.
This causes us three kinds of problems:
1. sqlite3 in WAL mode gives "OperationalError: locking protocol".
1b. Other sqlite3 journal modes also don't work when used by
multiple processes.
2. redo parallelism doesn't work, because we can't prevent the same
target from being built several times simultaneously.
3. "redo-log -f" doesn't work, since it can't tell whether the log
file it's tailing is "done" or not.
To fix #1, we switch the sqlite3 journal back to PERSIST instead of
WAL. We originally changed to WAL in commit 5156feae9d to reduce
deadlocks on MacOS. That was never adequately explained, but PERSIST
still acts weird on MacOS, so we'll only switch to PERSIST when we
detect that locking is definitely broken. Sigh.
To (mostly) fix #2, we disable any -j value > 1 when locking is broken.
This prevents basic forms of parallelism, but doesn't stop you from
re-entrantly starting other instances of redo. To fix that properly,
we need to switch to a different locking mechanism entirely, which is
tough in python. flock() locks would probably work, for example, but
python's locking wrappers lie and just use fcntl locks for those.
To fix #3, we always force --no-log mode when we find that locking is
broken.
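The detection mentioned above could look roughly like this (a sketch, not redo's actual code): a child process tries to take a lock the parent already holds. Since fcntl locks are per-process, a working kernel must refuse the child's request, so success means locking is broken.

```python
import fcntl
import os
import tempfile

def fcntl_locks_broken():
    fd, path = tempfile.mkstemp()
    try:
        # Parent takes an exclusive lock on a scratch file.
        fcntl.lockf(fd, fcntl.LOCK_EX)
        pid = os.fork()
        if pid == 0:
            # Child: open the file independently and try the same lock,
            # non-blocking so we get an immediate answer.
            cfd = os.open(path, os.O_RDWR)
            try:
                fcntl.lockf(cfd, fcntl.LOCK_EX | fcntl.LOCK_NB)
                os._exit(1)  # lock "succeeded": fcntl is a no-op here
            except OSError:
                os._exit(0)  # lock refused: fcntl works normally
        _, status = os.waitpid(pid, 0)
        return os.WEXITSTATUS(status) == 1
    finally:
        os.close(fd)
        os.unlink(path)
```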
Both redo and minimal/do were doing slightly weird things with
symlinked directories, especially when combined with "..". For
example, if x is a link to ., then x/x/x/x/../y should resolve to
"../y", which is quite non-obvious.
Added some tests to make sure this stays fixed.
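The example above can be reproduced with a quick experiment (using os.path functions here; redo's own resolver is different, but the lexical-vs-physical distinction is the point):

```python
import os
import tempfile

# Lexical normalization gets the symlink case wrong:
lexical = os.path.normpath('x/x/x/x/../y')
print(lexical)  # 'x/x/x/y'

# Physical resolution: with x -> ., every 'x' is the same directory,
# so 'x/x/x/x/../y' really means '../y' relative to where we started.
tmp = os.path.realpath(tempfile.mkdtemp())
os.symlink('.', os.path.join(tmp, 'x'))
os.chdir(tmp)
physical = os.path.realpath('x/x/x/x/../y')
print(physical == os.path.join(os.path.dirname(tmp), 'y'))  # True
```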
Now that the python scripts are all in a "redo" python module, we can
use the "new style" (ahem) package-relative imports. This appeases
pylint, plus avoids confusion in case more than one package has
similarly-named modules.
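A small demonstration of the ambiguity that package-relative imports avoid (the package and module names here are made up for the demo):

```python
import os
import sys
import tempfile

# Build a tiny package whose module "state" shares its name with a
# top-level module elsewhere on sys.path.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, 'redo_demo')
os.mkdir(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'state.py'), 'w') as f:
    f.write('WHO = "package-local state"\n')
with open(os.path.join(pkg, 'logs.py'), 'w') as f:
    # "from . import state" is explicit: it can only mean the
    # package's own state module, never the top-level impostor.
    f.write('from . import state\nWHO = state.WHO\n')
with open(os.path.join(tmp, 'state.py'), 'w') as f:
    f.write('WHO = "unrelated top-level state"\n')

sys.path.insert(0, tmp)
from redo_demo import logs
print(logs.WHO)  # "package-local state"
```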
This removes another instance of magical code running at module import
time. And the process title wasn't really part of the state database
anyway.
Unfortunately this uncovered a bug: the recent change to use
'python -S' means python can't find the setproctitle module even if
it's installed.
My goodness, I hate the horrible python easy_install module gunk that
makes startup linearly slower the more modules you have installed,
whether you import them or not, if you don't use -S. But oh well,
we're stuck with it for now.
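One way to keep setproctitle optional, so that 'python -S' (or a machine without the module) still works; this is a common fallback pattern, not necessarily the exact fix redo adopted:

```python
try:
    from setproctitle import setproctitle
except ImportError:
    def setproctitle(title):
        pass  # best effort: without the module, the title just stays as-is

# Safe to call unconditionally now:
setproctitle('redo demo-target')
```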
Merge the two files into env, and make each command explicitly call the
function that sets it up in the way that's needed for that command.
This means we can finally just import all the modules at the top of
each file, without worrying about import order. Phew.
While we're here, remove the weird auto-appending-'all'-to-targets
feature in env.init(). Instead, do it explicitly, and only from redo and
redo-ifchange, only if is_toplevel and no other targets are given.
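The explicit 'all' default might look like this (a sketch; Env here is a stand-in for redo's real env module):

```python
class Env:
    def __init__(self, is_toplevel):
        self.is_toplevel = is_toplevel

def default_targets(env, targets):
    # Only redo and redo-ifchange call this, and 'all' is appended
    # only at top level with an empty target list; env.init() itself
    # no longer does anything magical.
    if env.is_toplevel and not targets:
        return ['all']
    return list(targets)
```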
They really aren't locks at all; they're a cycle detector. Also rename
REDO_LOCKS to the more meaningful REDO_CYCLES. And we'll move the
CyclicDependencyError exception in here as well, instead of leaving it
in state.py, where it doesn't really belong.
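The idea behind the cycle detector can be sketched like this (hypothetical helper; redo's real code differs, but REDO_CYCLES and CyclicDependencyError are the names from the commit):

```python
import os

class CyclicDependencyError(Exception):
    pass

def check_cycle(target):
    # REDO_CYCLES carries the chain of in-progress targets down through
    # the environment to child processes; seeing a target twice means a
    # dependency loop, not a lock conflict (hence the rename away from
    # REDO_LOCKS).
    chain = [t for t in os.environ.get('REDO_CYCLES', '').split(':') if t]
    if target in chain:
        raise CyclicDependencyError('%s is already being built' % target)
    os.environ['REDO_CYCLES'] = ':'.join(chain + [target])
```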
It's time to start preparing for a version of redo that doesn't work
unless we build it first (because it will rely on C modules, and
eventually be rewritten in C altogether).
To get rolling, remove the old-style symlinks to the main programs, and
rename those programs from redo-*.py to redo/cmd_*.py. We'll also move
all library functions into the redo/ dir, which is a more python-style
naming convention.
Previously, install.do was generating wrappers for installing in
/usr/bin, which extend sys.path and then import+run the right file.
This made "installed" redo work quite differently from running redo
inside its source tree. Instead, let's always generate the wrappers in
bin/, and not make anything executable except those wrappers.
Since we're generating wrappers anyway, let's actually auto-detect the
right version of python for the running system; distros can't seem to
agree on what to call their python2 binaries (sigh). We'll fill in the
right #! shebang lines. Since we're doing that, we can stop using
/usr/bin/env, which will a) make things slightly faster, and b) let us
use "python -S", which tells python not to load a bunch of extra crap
we're not using, thus improving startup times.
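Roughly what the wrapper generation might do (a sketch; the candidate list and the fallback to the current interpreter are assumptions for the demo, not redo's actual install code):

```python
import shutil
import sys

def pick_python():
    # Probe the names distros actually use, falling back to whatever
    # interpreter is running this script so the demo works anywhere.
    for name in ('python2.7', 'python2', 'python', sys.executable):
        path = shutil.which(name)
        if path:
            return path
    raise SystemExit('no usable python interpreter found')

def make_wrapper(cmd_module):
    # Hard-coded #! line: no /usr/bin/env indirection, and -S so
    # python skips the site-packages scan at startup.
    return '#!%s -S\nfrom redo import %s\n%s.main()\n' % (
        pick_python(), cmd_module, cmd_module)

print(make_wrapper('cmd_ifchange'))
```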
Annoyingly, we now have to build redo using minimal/do, then run the
tests using bin/redo. To make this less annoying, we add a toplevel
./do script that knows the right steps, and a Makefile (whee!) for
people who are used to typing 'make' and 'make test' and 'make clean'.