apenwarr-redo/t/flush-cache.in
Avery Pennarun  f6fe00db5c  2018-12-04 02:53:40 -05:00

Directory reorg: move code into redo/, generate binaries in bin/.

It's time to start preparing for a version of redo that doesn't work
unless we build it first (because it will rely on C modules, and
eventually be rewritten in C altogether).

To get rolling, remove the old-style symlinks to the main programs, and
rename those programs from redo-*.py to redo/cmd_*.py.  We'll also move
all library functions into the redo/ dir, which is a more python-style
naming convention.

Previously, install.do was generating wrappers for installing in
/usr/bin, which extend sys.path and then import+run the right file.
This made "installed" redo work quite differently from running redo
inside its source tree.  Instead, let's always generate the wrappers in
bin/, and not make anything executable except those wrappers.
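
As a rough illustration (not the exact generated text; the source path and
command module below are placeholders), a wrapper in bin/ boils down to
something like:

    #!/usr/bin/python2 -S
    # Hypothetical sketch of a generated bin/ wrapper; the real one is
    # written by the build with the detected interpreter and real paths.
    import sys, runpy
    sys.path.insert(0, "/path/to/redo/source")  # placeholder for the source dir
    runpy.run_module("redo.cmd_redo", run_name="__main__")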

Since we're generating wrappers anyway, let's actually auto-detect the
right version of python for the running system; distros can't seem to
agree on what to call their python2 binaries (sigh). We'll fill in the
right #! shebang lines.  Since we're doing that, we can stop using
/usr/bin/env, which will a) make things slightly faster, and b) let us
use "python -S", which tells python not to load a bunch of extra crap
we're not using, thus improving startup times.
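
The detection step itself can be tiny; something along these lines (a sketch
only, with the candidate names and output variable made up for illustration):

    # Hypothetical sketch of the interpreter-detection step.
    import os

    def find_python(candidates=("python2.7", "python2", "python")):
        for name in candidates:
            for d in os.environ.get("PATH", "").split(os.pathsep):
                path = os.path.join(d, name)
                if os.path.isfile(path) and os.access(path, os.X_OK):
                    return path
        raise SystemExit("no usable python found")

    shebang = "#!%s -S\n" % find_python()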

Annoyingly, we now have to build redo using minimal/do, then run the
tests using bin/redo.  To make this less annoying, we add a toplevel
./do script that knows the right steps, and a Makefile (whee!) for
people who are used to typing 'make' and 'make test' and 'make clean'.

import sys, os, sqlite3
if "DO_BUILT" in os.environ:
sys.exit(0)
sys.stderr.write("Flushing redo cache...\n")
db_file = os.path.join(os.environ["REDO_BASE"], ".redo/db.sqlite3")
db = sqlite3.connect(db_file, timeout=5000)
# This is very (overly) tricky. Every time we flush the cache, we run an
# atomic transaction that subtracts 1 from all checked_runid and
# changed_runid values across the entire system. Then when checking
# dependencies, we can see if changed_runid for a given dependency is
# greater than checked_runid for the target, and their *relative* values
# will still be intact! So if a dependency had been built during the
# current run, it will act as if a *previous* run built the dependency but
# the current target was built even earlier. Meanwhile, checked_runid is
# less than REDO_RUNID, so everything will still need to be rechecked.
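# For example: a dependency with changed_runid=42 and a target with
# checked_runid=41 (the target needs rechecking, since 42 > 41) become 41 and
# 40 after a flush; 41 > 40 still holds, so the dependency still looks newer
# than the target's last check.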
#
# A second tricky point is that failed_runid is usually null (unless
# building a given target really did fail last time). (null - 1) is still
# null, so this transaction doesn't change failed_runid at all unless it
# really did fail.
#
# Finally, an even more insane problem is that since we decrement these
# values more than once per run, they end up decreasing fairly rapidly.
# But 0 is special! Some code treats failed_runid==0 as if it were null,
# so when we decrement all the way to zero, we get a spurious test failure.
# To avoid this, we initialize the runid to a very large number at database
# creation time.
db.executescript("pragma synchronous = off;"
                 "update Files set checked_runid=checked_runid-1, "
                 "  changed_runid=changed_runid-1, "
                 "  failed_runid=failed_runid-1;")
db.commit()