This tests that
x=y f
does *not* unset x after running f, if f is a shell function. Apparently
that's the right thing to do, but freebsd dash 0.5.10.2 fails it. This is
surprising because debian dash 0.5.8-2.4 passes, and it is presumably older.
Downgrade this to a warning because we want freebsd to have some cheap sh
variant that passes with redo, and nobody should be *relying* on this
insane behaviour anyway.
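A minimal probe of the behaviour under test (a sketch, not the actual test from the suite); the "after" result varies by shell, which is exactly the portability problem:

```shell
#!/bin/sh
# Does 'x=y f' leave x set after f returns, when f is a shell function?
# POSIX says the assignment persists across a function call; some shells
# restore the old value instead.
f() {
    inside=$x            # record the value f saw
}
unset x
x=y f
echo "inside f, x was: $inside"
if [ "${x+set}" = "set" ]; then
    echo "after f: x persists"
else
    echo "after f: x was unset"
fi
```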
freebsd 11.2 sh (but not freebsd dash) fails test #24. That one seems
rather serious, so I don't want to downgrade it to a warning.
If we end up in builder phase 2, where we might need to
build stuff that was previously locked by someone else,
we will need to obtain a job token *and* the lock at the
same time in order to continue. To prevent deadlocks,
we don't wait synchronously for one lock while holding the
other.
If several instances are fighting over the same lock and
there are insufficient job tokens for everyone, timing
could cause them to fight for a long time. This seems
to happen a lot in freebsd for some reason. To be a good
citizen, sleep for a while after each loop iteration.
This should ensure that eventually, most of the fighting
instances will be asleep by the time the next one tries to
grab the token, thus breaking the deadlock.
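A sketch of the idea, not redo's actual code: acquire both resources non-blocking, and on failure drop whatever we hold and sleep before the next attempt, so contending instances drift out of phase. The mkdir "locks" here are stand-ins for the real job token and target lock, and the paths are made up:

```shell
#!/bin/sh
token=/tmp/demo-token.$$
lock=/tmp/demo-lock.$$
tries=0
while :; do
    # Try to grab both without blocking (mkdir fails if the dir exists).
    if mkdir "$token" 2>/dev/null && mkdir "$lock" 2>/dev/null; then
        echo "got token and lock"
        break
    fi
    rmdir "$token" "$lock" 2>/dev/null   # release any partial acquisition
    sleep 1                              # be a good citizen between tries
    tries=$((tries+1))
done
rmdir "$token" "$lock"
echo "tries=$tries"
```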
-x, -v, and -d are the same as in redo.
-c means "continuable", which disables the feature that deletes (and
forgets) all targets at the start of each run. This is a little risky,
since minimal/do still doesn't understand dependencies, but it allows
you to run minimal/do several times in succession, so that
minimal/do -c a
minimal/do -c b
is the same as
minimal/do a b
We previously assumed that redo and redo-ifchange are the same in
minimal/do's design, because it rebuilds all targets on every run, and
so there's no reason to ever build the same target more than once.
Unfortunately that's incorrect: if you run 'redo x' from two points in
a single run (or even twice in the same .do file), we expect x to be
built twice. If you wanted redo to decide whether to build it the
second time, you should have used redo-ifchange.
t/102-empty/touchtest was trying to test for this. However, a
second bug in minimal/do made the test pass anyway. minimal/do would
*always* rebuild any target x that produced no output, no matter whether
it had already tried to build it before, and regardless of whether you
used redo or redo-ifchange. And while we tested that redo would redo a
file that
had been deleted, we didn't ensure that it would redo a file that was
*not* deleted, nor that redo-ifchange would *not* redo that file.
Fix both bugs in minimal/do, and make t/102-empty/touchtest cover the
missing cases.
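A hypothetical .do fragment illustrating the intended semantics (the target name x is made up):

```sh
# all.do (sketch): redo is unconditional, so x builds here...
redo x
# ...and builds again here, even within the same .do file.
redo x
# redo-ifchange sees x is already up to date in this run: no rebuild.
redo-ifchange x
```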
We need to create the File object to get its f.id, then lock that id.
During that gap, another instance of redo may have modified the file or
its state data, so we have to refresh it.
This fixes 'redo -j10 t/stress'.
It looks like we're updating the stamp for t/countall while another
task is replacing the file, which suggests a race condition in our
state management database.
The first time we notice a file has been overridden, log the old and
new stamp data, which might give a hint about how this happened.
Currently if I do
rm t/950-curse/countall
while :; do redo -j10 t/950-curse/all --shuffle || break; done
it will end up complaining that countall has been overridden within
just a few runs, even though it definitely hasn't been. There seems to
be someone reading a file stamp while someone else is redoing the
file, but I haven't found it yet.
flush-cache can cause files affected by redo-stamp to get rebuilt
unnecessarily; avoiding those rebuilds is exactly what the test is
trying to validate.
Since other tests run flush-cache at random times when using -j, this
would cause random test failures.
Because the two programs use separate state databases, it helps if we
clean up some temp files between runs. Otherwise they might think you
created some targets "by hand" and refuse to rebuild them.
I'm not quite sure what I was thinking, using redo-ifchange there, but
the result was that some tests wouldn't run if you run 'redo test'
repeatedly, even after modifying redo itself.
Also tweaked t/950-curse so that it always runs, not just the first
time.
We grew a test for these at some point, but minimal/do didn't actually
pass it.
sh syntax oddity: if you say
x=$1 y=$2
then it works fine when $1 and $2 contain spaces. But if you say
export x=$1 y=$2
(or "local" instead of "export") then $1 and $2 will be split on IFS,
and it won't do what you think. I guess this is because 'export' and
'local' are ordinary commands, so their arguments are field-split like
any other unquoted command arguments.
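The portable fix is to quote the expansion, so no field splitting can happen no matter how the shell treats 'export' arguments. A small self-contained demonstration:

```shell
#!/bin/sh
set -- "hello world"     # simulate a $1 that contains a space
export x="$1"            # quoted: safe in every shell
echo "x=[$x]"
```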
I feel a little dirty doing this, but the way the code was before, redo
almost always picked bash as the shell. bash is way too overpowered
and this led to bashisms in do scripts unnecessarily. The two failures
in dash are things that I would really like to have, but they haven't
materialized after 6 years, so I guess we should be realistic.
To appropriately penalize bash for asking for trouble, I added a
warning about [ 1 == 1 ] syntax being valid (as opposed to the POSIX
correct [ 1 = 1 ]). This allows dash to be selected ahead of bash.
I also moved 'sh' to the end of the list, because although it's the
weakest shell on some systems, on other systems it's just bash. And I
put zsh in front of bash, because fewer people have zsh and we want
them to test zsh.
The builder was holding lock variables across loop iterations, which
meant that sometimes a state.Lock object would be created for the same
file id twice, triggering the assertion. Assign the lock variables to
None to ensure each state.Lock object is destroyed before creating the
next one in the loop.
If a depends on b which depends on a, redo would just freeze. Now it
aborts with a somewhat helpful error message.
[Updated by apenwarr for coding style and to add a test.]
Removed information (ctime) that was causing spurious
"you modified it" warnings.
[apenwarr: This was not quite right. ctime includes some things we
don't want to care about, such as link count, so we have to remove it.
But it also notices changes to st_uid, st_gid, and st_mode, which we do
care about, so now we have to include those explicitly.]
[apenwarr's note: ctime includes extra inode attributes like link
count, which are not important for this check, but which could cause
spurious warnings.]
The >& form is only for file descriptors; passing a file name there is
a bash extension.
$ /bin/dash -c 'echo foo >&/dev/null'
/bin/dash: 1: Syntax error: Bad fd number
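The portable spelling redirects stdout to the file and then duplicates stderr onto stdout, which works in dash, bash, and any POSIX sh:

```shell
#!/bin/sh
echo foo >/dev/null 2>&1   # portable equivalent of 'echo foo >&/dev/null'
echo "exit status: $?"
```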
WAL mode does make the deadlocking on MacOS go away. This suggests
that pysqlite3 was leaving *read* transactions open for a long time; in
old-style sqlite journals, this prevents anyone from obtaining a write
lock, although it doesn't prevent other concurrent reads.
With WAL journals, writes can happen even while readers are holding a
lock, but the journal doesn't flush until the readers have released it.
This is not a "real" fix but it's fairly harmless, since all redo
instances will exit eventually, and when they do, the WAL journal can
be flushed.
I think we were sometimes leaving half-done sqlite transactions sitting
around for a long time (eg. across sub-calls to .do files). This
seemed to be okay on Linux, but caused sqlite deadlocks on MacOS. Most
likely it's not the operating system, but the sqlite version and
journal mode in use.
In any case, the correct thing to do is to actually commit or rollback
transactions, not leave them hanging around.
...unfortunately this doesn't actually fix my MacOS deadlocks, which
makes me rather nervous.
If ./default.do knows how to build x/y/z, then we will run
./default.do x/y/z x/y/z x__y__z.redo2.tmp
which can correctly generate $3, but then we can fail to rename it to
x/y/z because x/y doesn't exist. Previously this would throw an
exception; now it prints a helpful error message.
default.do may create x/y, in which case renaming will succeed.
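A simulation of the rename step (the paths and $1/$2/$3 values mirror the example above, but this is a sketch, not redo's code): if the .do file creates the target's directory, the final rename succeeds.

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
cd "$tmp"
set -- x/y/z x/y/z x__y__z.redo2.tmp    # $1 $2 $3 as redo would pass them
mkdir -p "$(dirname "$1")"              # the fix: make sure x/y exists
echo "built $1" >"$3"
mv "$3" "$1"                            # the rename redo performs
result=$(cat "$1")
echo "$result"
cd / && rm -rf "$tmp"
```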
I think this aligns better with how redo works. Otherwise, if a.do
creates a as a symlink, then changes to the symlink's *target* will
change a's stat/stamp information without re-running a.do, which looks
to redo like you modified a by hand, which causes it to stop running
a.do altogether.
With this change, modifications to a's target are okay, but they don't
trigger any redo dependency changes. If you want that, then a.do
should redo-ifchange on its symlink target explicitly.
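A hypothetical a.do along those lines (the target path is made up):

```sh
# a.do (sketch): declare the dependency on the symlink's target
# explicitly, since changes to the target no longer re-trigger a.do.
target=some/real/file
redo-ifchange "$target"
ln -sf "$target" "$3"
```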
For example:
$ redo-whichdo a/b/c.x.y
- a/b/c.x.y.do
- a/b/default.x.y.do
- a/b/default.y.do
- a/b/default.do
- a/default.x.y.do
- a/default.y.do
- a/default.do
- default.x.y.do
- default.y.do
+ default.do
1 a/b/c.x.y
2 a/b/c.x.y
Lines starting with '-' mean a potential .do file that did not exist,
so we moved on to the next choice (but consider using redo-ifcreate in
case it gets created). '+' means the .do file we actually chose. '1'
and '2' are the $1 and $2 to pass along to the given .do file if you want to
call it for the given target.
(The output format is a little weird to make sure it's parseable with
sh 'read x y' calls, even when filenames contain spaces or special
characters.)
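Parsing that format with 'read', as it is designed for. The input below is canned sample output rather than a real redo-whichdo run; note that a filename containing a space survives intact, because 'read mark f' puts everything after the first field into f.

```shell
#!/bin/sh
parse() {
    while read -r mark f; do
        case "$mark" in
            -)   echo "missing: $f" ;;
            +)   echo "chosen: $f" ;;
            1|2) echo "arg $mark: $f" ;;
        esac
    done
}
out=$(printf '%s\n' \
    '- a/b/default.do' \
    '+ default.do' \
    '1 a/b/c.x y' | parse)
echo "$out"
```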
GNU make post-4.2 renamed the --jobserver-fds option to
--jobserver-auth. For compatibility with both older and newer
versions, when we set MAKEFLAGS we set both, and when we read MAKEFLAGS
we will accept either one.
Also, when MAKEFLAGS was not already set, redo would set a MAKEFLAGS with a
leading 'None' string, which was incorrect. It should be the empty
string instead.
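A sketch of accepting either flag name when reading MAKEFLAGS (the fd numbers below are made up for illustration):

```shell
#!/bin/sh
has_jobserver() {
    case "$1" in
        *--jobserver-auth=*|*--jobserver-fds=*) echo yes ;;
        *) echo no ;;
    esac
}
old=$(has_jobserver '-j --jobserver-fds=3,4')    # older GNU make spelling
new=$(has_jobserver '-j --jobserver-auth=3,4')   # newer GNU make spelling
echo "old=$old new=$new"
```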
...or files that contain bytes but not a trailing newline. It's okay if we
don't get any data, but we definitely have to *not* let "set -e" abort us.
Fixing "set -e" in the previous patch revealed this problem.
If you run something like
blah_function || return 1
then everything even *inside* blah_function is *not* subject to the "set -e"
that would otherwise be in effect. That's true even for ". subfile" inside
blah_function - which is exactly how minimal/do runs .do files.
Instead, rewrite it as
blah_function
[ "$?" = "0" ] || return 1
And add a bit to the unit tests to ensure that "set -e" behaviour is enabled
in .do files as we expect, and crash loudly otherwise.
(This weird behaviour may only happen in some shells and not others.)
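A self-contained demonstration of the quirk, run in a subshell: because f is tested by '||', "set -e" is suspended for f's entire body, so the 'false' does not abort it and f "succeeds".

```shell
#!/bin/sh
out=$(sh -c '
set -e
f() {
    false                            # ignored: set -e is suspended here
    echo "f kept running past false"
}
f || echo "f reported failure"       # the || branch never runs
')
echo "$out"
```

With the rewrite above (calling f on its own line, then checking $? separately), f is no longer part of a "||" list, so "set -e" stays in effect inside it.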
Also, we had a "helpful" alias of redo() defined at the bottom of the file.
Combined with the way we use '.' to source the .do files, this would make it
not start a new shell just to run a recursive 'redo' command. It almost
works, but this stupid "set -e" bug could cause a nested .do file to not
honour "set -e" if someone ran "redo foo || exit 1" from inside a .do
script. The performance optimization is clearly not worth it here, so
rename it to _redo(); that causes it to actually re-exec the redo program
(which is a symlink to minimal/do).
If you use "redo --old-args", it will switch back to the old
(apenwarr-style) arguments for now, to give you time to update your .do
scripts. This option will go away eventually.
Note: minimal/do doesn't understand the --old-args option. If you're using
minimal/do in your project, keep using the old one until you update your use
of $1/$2, and then update to the new one.
apenwarr-style default.o.do:
  $1  foo
  $2  .o
  $3  whatever.tmp

djb-style default.o.do:
  $1  foo.o
  $2  foo
  $3  whatever.tmp

apenwarr-style foo.o.do:
  $1  foo.o
  $2  ""
  $3  whatever.tmp

djb-style foo.o.do:
  $1  foo.o
  $2  foo.o (I think?)
  $3  whatever.tmp