Based on the earlier t/000-set-minus-e bug in minimal/do on some
shells, let's add some extra tests that reveal the same weirdness.
Unfortunately, because the affected shells are so popular (bash and
zsh among them), we can't reject them outright for failing this one.
While we're here, add a new return code, "skip", which notes that a
test has failed but is not important enough to be considered a warning
or failure. Previously we just had these commented out, which is not
quite obvious enough.
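The result levels might be sketched like this; the numeric codes and
function names here are hypothetical, not necessarily what redo-sh.do
actually uses:

```shell
#!/bin/sh
# Hypothetical result levels for a single shell test. Only the
# ordering matters for this sketch: ok < skip < warn < fail, where
# "skip" records a failure that shouldn't count against the shell.
ok()   { return 0; }
skip() { return 1; }
warn() { return 2; }
fail() { return 3; }

# Hypothetical scoring loop: remember the worst result seen, so a
# shell that only "skips" can still be selected.
worst=0
for t in ok skip warn; do
    rc=0
    $t || rc=$?
    if [ "$rc" -gt "$worst" ]; then worst=$rc; fi
done
echo "worst result: $worst"
```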
...and I updated a few comments while reviewing some of the older
tests.
We were setting a global variable FAIL on failure, but if we failed
inside a subshell (which a very small number of tests might do), this
setting would be lost. The script output (a series of failed/warning
lines) was still valid, but not the return code, so the shell might be
selected even if one of these tests failed.
To avoid the problem, put the fail/warning state in the filesystem
instead, which is shared across subshells.
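The trick can be sketched as follows (fail.tmp is a hypothetical file
name, not necessarily the one the script uses):

```shell
#!/bin/sh
# Hypothetical sketch: a variable set inside a subshell is lost when
# the subshell exits, but a file created there survives, so record
# the fail/warning state in the filesystem instead.
rm -f fail.tmp
FAIL=

( FAIL=1; touch fail.tmp )   # a test failing inside a subshell

if [ -n "$FAIL" ]; then echo "variable survived"; fi
if [ -e fail.tmp ]; then echo "file survived"; fi
rm -f fail.tmp
```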
It's time to start preparing for a version of redo that doesn't work
unless we build it first (because it will rely on C modules, and
eventually be rewritten in C altogether).
To get rolling, remove the old-style symlinks to the main programs, and
rename those programs from redo-*.py to redo/cmd_*.py. We'll also move
all library functions into the redo/ dir, which follows a more
Python-style naming convention.
Previously, install.do was generating wrappers for installing in
/usr/bin, which extended sys.path and then imported and ran the right
file. This made "installed" redo work quite differently from redo
running inside its source tree. Instead, let's always generate the
wrappers in bin/, and not make anything executable except those
wrappers.
Since we're generating wrappers anyway, let's actually auto-detect the
right version of python for the running system; distros can't seem to
agree on what to call their python2 binaries (sigh). We'll fill in the
right #! shebang lines. Since we're doing that, we can stop using
/usr/bin/env, which will a) make things slightly faster, and b) let us
use "python -S", which tells python not to load a bunch of extra crap
we're not using, thus improving startup times.
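Generating such a wrapper might look roughly like this; the list of
binary names searched and the wrapper body are illustrative
assumptions, not the actual install.do code:

```shell
#!/bin/sh
# Hypothetical sketch of install-time wrapper generation: find a
# usable python binary once, then bake its full path into the #!
# line with -S so startup skips extra site initialization.
PYTHON=
for py in python2.7 python2 python3 python; do
    if command -v "$py" >/dev/null 2>&1; then
        PYTHON=$(command -v "$py")
        break
    fi
done
[ -n "$PYTHON" ] || { echo "no python found" >&2; exit 1; }

mkdir -p bin
cat >bin/redo <<EOF
#!$PYTHON -S
import sys, os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
# (illustrative: the real wrapper imports and runs the right module)
EOF
chmod +x bin/redo
```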
Annoyingly, we now have to build redo using minimal/do, then run the
tests using bin/redo. To make this less annoying, we add a toplevel
./do script that knows the right steps, and a Makefile (whee!) for
people who are used to typing 'make' and 'make test' and 'make clean'.
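The wrapper logic is tiny; a sketch under those assumptions (not the
actual ./do script):

```shell
#!/bin/sh
# Hypothetical sketch of the toplevel ./do: hand off to the built
# bin/redo if it exists, otherwise bootstrap with minimal/do.
run_do() {
    if [ -x bin/redo ]; then
        bin/redo "$@"
    else
        minimal/do "$@"
    fi
}
```

The Makefile can then be a thin shim whose targets (all, test, clean)
do nothing but invoke ./do with the matching argument.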
It looks like we're updating the stamp for t/countall while another
task is replacing the file, which suggests a race condition in our
state management database.
* master:
Fixed markdown errors in README - code samples now correctly formatted.
Fix use of config.sh in example
log.py, minimal/do: don't use ansi colour codes if $TERM is blank or 'dumb'
Use named constants for terminal control codes.
redo-sh: keep testing even after finding a 'good' shell.
redo-sh.do: hide warning output from 'which' in some shells.
redo-sh.do: wrap long lines.
Handle .do files that start with "#!/" to specify an explicit interpreter.
minimal/do: don't print an error on exit if we don't build anything.
bash completions: also mark 'do' as a completable command.
bash completions: work correctly when $cur is an empty string.
bash completions: call redo-targets for a more complete list.
bash completions: work correctly with subdirs, ie. 'redo t/<tab>'
Sample bash completion rules for redo targets.
minimal/do: faster deletion of stamp files.
minimal/do: delete .tmp files if a build fails.
minimal/do: use ".did" stamp files instead of empty target files.
minimal/do: use posix shell features instead of dirname/basename.
Automatically select a good shell instead of relying on /bin/sh.
Conflicts:
t/clean.do
This includes a fairly detailed test of various known shell bugs from the
autoconf docs.
The idea here is that if redo works on your system, you should be able to
rely on a *good* shell to run your .do files; you shouldn't have to work
around zillions of bugs like autoconf does.
Previously, we would only search for default*.do in the same directory
as the target; now we search parent directories as well.
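The widened search can be sketched as a function that lists candidate
.do files in order; this is an illustration of the rule (ignoring
extension-less default.do and other details), not the actual
implementation:

```shell
#!/bin/sh
# Hypothetical sketch: for a target like a/b/foo.o, try the specific
# .do file first, then default.<ext>.do in each directory walking up
# toward the top of the project.
candidates() {
    t=$1
    base=${t##*/}
    ext=${base#*.}          # crude extension grab: "o" for foo.o
    echo "$t.do"
    d=${t%/*}
    while [ "$d" != "$t" ]; do
        echo "$d/default.$ext.do"
        t=$d
        d=${t%/*}
    done
    echo "default.$ext.do"
}

candidates a/b/foo.o
```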
Let's say we're in a/b/ and trying to build foo.o. If we find
../../default.o.do, then we'll run

    cd ../..; sh default.o.do a/b/foo .o $TMPNAME

In other words, we still always chdir to the same directory as the .do file.
But now $1 might have a path in it, not just a basename.
This is slightly inelegant, as the old style

    echo foo
    echo blah
    chmod a+x $3

doesn't work anymore; the stuff you wrote to stdout won't end up in $3.
You can rewrite it as:

    exec >$3
    echo foo
    echo blah
    chmod a+x $3
Anyway, it's better this way, because now we can tell the difference between
a zero-length $3 and a nonexistent one. A .do script can thus produce
either one and we'll either delete the target or move the empty $3 to
replace it, whichever is right.
As a bonus, this simplifies our detection of whether you did something weird
with overlapping changes to stdout and $3.
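The resulting rule can be sketched like this (a simulation of the
decision, with hypothetical names, not redo's actual code):

```shell
#!/bin/sh
# Hypothetical simulation: after a .do script runs, an existing
# (even empty) $3 replaces the target; a nonexistent $3 means the
# target should be deleted.
finish() {
    tmp=$1 target=$2
    if [ -e "$tmp" ]; then
        mv "$tmp" "$target"     # empty $3 -> target becomes empty
    else
        rm -f "$target"         # no $3 at all -> delete the target
    fi
}
```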
We would build 'somefile' correctly the first time, but we wouldn't
attach the dependency on somefile to the right $TARGET, so our target would
not auto-rebuild in the future based on somefile.
It actually decreases readability of the .do files, by not making it
explicit when you're going into a subdir.
Plus it adds ambiguity: what if there's a dirname.do *and* a dirname/all?
We could resolve the ambiguity if we wanted, but that adds more code, while
taking out this special case makes *less* code and improves readability.
I think it's the right way to go.