# apenwarr-redo/redo/cmd_ifchange.py


"""redo-ifchange: build the given targets if they have changed."""
import os, sys, traceback
from . import env, builder, deps, helpers, jobserver, logs, state
from .logs import debug2, err


def should_build(t):
    """Return (is_generated, dirty) for target t, aborting on past failure."""
    f = state.File(name=t)
    if f.is_failed():
        raise helpers.ImmediateReturn(32)
    dirty = deps.isdirty(f, depth='', max_changed=env.v.RUNID,
                         already_checked=[])
    # Pre-2.5 `cond and a or b` idiom: if the only dirty thing is the target
    # itself, report DIRTY; otherwise report the list of dirty dependencies.
    return f.is_generated, dirty == [f] and deps.DIRTY or dirty


def main():
    rv = 202
    try:
        targets = sys.argv[1:]
        state.init(targets)
        if env.is_toplevel and not targets:
            targets = ['all']
        if env.is_toplevel and env.v.LOG:
            builder.close_stdin()
            builder.start_stdin_log_reader(
                status=True, details=True,
                pretty=True, color=True, debug_locks=False, debug_pids=False)
        else:
            logs.setup(
                tty=sys.stderr, parent_logs=env.v.LOG,
                pretty=env.v.PRETTY, color=env.v.COLOR)
        if env.v.TARGET and not env.v.UNLOCKED:
            me = os.path.join(env.v.STARTDIR,
                              os.path.join(env.v.PWD, env.v.TARGET))
            f = state.File(name=me)
            debug2('TARGET: %r %r %r\n'
                   % (env.v.STARTDIR, env.v.PWD, env.v.TARGET))
        else:
            f = me = None
            debug2('redo-ifchange: not adding depends.\n')
        jobserver.setup(0)
        try:
            if f:
                # Record each requested target as a dependency of the
                # parent target that invoked redo-ifchange.
                for t in targets:
                    f.add_dep('m', t)
                f.save()
                state.commit()
            rv = builder.run(targets, should_build)
        finally:
            try:
                state.rollback()
            finally:
                try:
                    jobserver.force_return_tokens()
                except Exception as e:  # pylint: disable=broad-except
                    traceback.print_exc(100, sys.stderr)
                    err('unexpected error: %r\n' % e)
                    rv = 1
    except (KeyboardInterrupt, helpers.ImmediateReturn):
        if env.is_toplevel:
            builder.await_log_reader()
        sys.exit(200)
    state.commit()
    if env.is_toplevel:
        builder.await_log_reader()
    sys.exit(rv)


if __name__ == '__main__':
    main()