Merge branch 'redo-log'

* redo-log:
  redo-log: add man page.
  redo-log: add automated tests, and fix some path bugs revealed by them.
  redo-log: fix stdout vs stderr; don't recapture if .do script redirects stderr.
  redo-log: don't show status line until >1.0 seconds after starting.
  Add --color and --no-color options.
  redo-log: --debug-pids works properly again.
  Split --raw-logs into --no-pretty and --no-log options.
  redo-log: prioritize the "foreground" process.
  redo-log: correctly indent first level of recursion.
  Raw logs contain @@REDO lines instead of formatted data.
  redo-log: status line should use actual terminal width.
  redo-log: capture and linearize the output of redo builds.
  Use signal.setitimer instead of signal.alarm.
Avery Pennarun 2018-11-19 19:33:04 -05:00
commit 4842996b5b
32 changed files with 1114 additions and 182 deletions

Documentation/redo-log.md Normal file

@@ -0,0 +1,83 @@
# NAME
redo-log - display log messages from previous builds
# SYNOPSIS
redo-log [options...] [targets...]
# DESCRIPTION
When redo runs, it captures the standard error (stderr) output from the
build activity for each target, and saves it for later. When a target is
rebuilt, the new logs replace the old logs for that target. redo-log
prints the log output for any set of targets.
# OPTIONS
-r, --recursive
: if the requested targets cause any dependencies to be built, recursively
show the logs from those dependencies as well. (And if those
dependencies build further dependencies, also show those logs, and so on.)
-u, --unchanged
: show messages even for dependencies that were unchanged (did not need to be
rebuilt). To do this, we show the logs for the *most recent* build of
each affected dependency. Usually this is used with `-r`.
-f, --follow
: if a build is currently running for any of the requested targets or
their dependencies, follow the logs (like `tail -f`) until the build
finishes.
--no-details
: display *only* the messages from redo itself, not the other messages
produced by build scripts. Generally this gives you a list of which
targets were built, but not detailed logs, warnings, or errors.
--no-status
: don't display the running build status at the bottom of the screen.
(Unless this option is specified, the status line is enabled when
using --follow and stderr is a terminal.) If stderr is *not* a
terminal, you can force-enable the status line using --status.
--no-pretty
: display "raw" redo log lines (@@REDO events) rather than using a
human-readable format. The default is --pretty.
--no-color
: disable colorized output. By default, when using --pretty and writing
to a terminal, redo-log colorizes the output to make results stand out
more clearly. If not writing to a terminal, you can use --color to
force colorized output.
--debug-locks
: print messages about acquiring, releasing, and waiting
on locks. Because redo can be highly parallelized,
one instance may end up waiting for a target to be
built by some other instance before it can continue.
If you suspect this is causing troubles, use this
option to see which instance is waiting and when.
--debug-pids
: add the process id of the particular redo instance to each
output message. This makes it easier to figure out
which sub-instance of redo is doing what.
# REDO
Part of the `redo`(1) suite.
# CREDITS
The original concept for `redo` was created by D. J.
Bernstein and documented on his web site
(http://cr.yp.to/redo.html). This independent implementation
was created by Avery Pennarun and you can find its source
code at http://github.com/apenwarr/redo.
# SEE ALSO
`redo`(1)
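The capture-and-replay model in DESCRIPTION can be illustrated with a small standalone sketch. The helper names here are hypothetical, and redo itself stores logs through its state database rather than loose files; this is only the general idea:

```python
import os, subprocess, tempfile

def build_and_log(target, argv, logdir):
    # Run one build step, saving its stderr to a per-target logfile.
    # Opening with 'w' truncates, so rebuilding a target replaces its
    # old log, just as DESCRIPTION says.
    logname = os.path.join(logdir, target + '.log')
    with open(logname, 'w') as logf:
        return subprocess.call(argv, stderr=logf)

def show_log(target, logdir):
    # Conceptually what redo-log does: print the saved output back out.
    with open(os.path.join(logdir, target + '.log')) as f:
        return f.read()
```

Running `build_and_log` twice for the same target leaves only the newest log, which is what makes `redo-log` show one coherent transcript per target.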


@@ -86,6 +86,32 @@ dependencies.
 can just run three `redo` commands consecutively.
 Because your .do script is just a script, it will not
 be accidentally parallelized.
+--no-details
+: display *only* the messages from redo itself, not the other messages
+produced by build scripts. Generally this gives you a list of which
+targets were built, but not detailed logs, warnings, or errors.
+--no-status
+: don't display the running build status at the bottom of the screen.
+(Unless this option is specified, the status line will be enabled
+if using --follow, if stderr is a terminal.) If stderr is *not* a
+terminal, you can force enable the status line using --status.
+--no-pretty
+: display "raw" redo log lines (@@REDO events) rather than using a
+human-readable format. The default is --pretty.
+--no-color
+: when using --pretty and writing to a terminal, colorize the output to
+make results stand out more clearly. If not writing to a terminal, you
+can use --color to force colorized output.
+--no-log
+: don't capture stderr log messages from build scripts. This prevents
+redo-log from redisplaying the logs later, and if using --jobs, causes
+output from all parallel jobs to be jumbled together. This was the
+only behaviour available before redo-0.30. The default is --log.
 --debug-locks
 : print messages about acquiring, releasing, and waiting
@@ -100,6 +126,9 @@ dependencies.
 output message. This makes it easier to figure out
 which sub-instance of redo is doing what.
+--version
+: displays the redo version number.
+
 # DISCUSSION


@@ -1,7 +1,8 @@
 import sys, os, errno, random, stat, signal, time
 import vars, jwack, state, paths
 from helpers import unlink, close_on_exec, join
-from log import log, log_, debug, debug2, err, warn
+import logs
+from logs import debug, debug2, err, warn, meta, check_tty

 def _nice(t):
@@ -18,6 +19,84 @@ def _try_stat(filename):
         raise

+log_reader_pid = None
+
+def start_stdin_log_reader(status, details, pretty, color,
+                           debug_locks, debug_pids):
+    if not vars.LOG: return
+    global log_reader_pid
+    r, w = os.pipe()    # main pipe to redo-log
+    ar, aw = os.pipe()  # ack pipe from redo-log --ack-fd
+    sys.stdout.flush()
+    sys.stderr.flush()
+    pid = os.fork()
+    if pid:
+        # parent
+        log_reader_pid = pid
+        os.close(r)
+        os.close(aw)
+        b = os.read(ar, 8)
+        if not b:
+            # subprocess died without sending us anything: that's bad.
+            err('failed to start redo-log subprocess; cannot continue.\n')
+            os._exit(99)
+        assert b == 'REDO-OK\n'
+        # now we know the subproc is running and will report our errors
+        # to stderr, so it's okay to lose our own stderr.
+        os.close(ar)
+        os.dup2(w, 1)
+        os.dup2(w, 2)
+        os.close(w)
+        check_tty(sys.stderr, vars.COLOR)
+    else:
+        # child
+        try:
+            os.close(ar)
+            os.close(w)
+            os.dup2(r, 0)
+            os.close(r)
+            # redo-log sends to stdout (because if you ask for logs, that's
+            # the output you wanted!). But redo itself sends logs to stderr
+            # (because they're incidental to the thing you asked for).
+            # To make these semantics work, we point redo-log's stdout at
+            # our stderr when we launch it.
+            os.dup2(2, 1)
+            argv = [
+                'redo-log',
+                '--recursive', '--follow',
+                '--ack-fd', str(aw),
+                ('--status' if status and os.isatty(2) else '--no-status'),
+                ('--details' if details else '--no-details'),
+                ('--pretty' if pretty else '--no-pretty'),
+                ('--debug-locks' if debug_locks else '--no-debug-locks'),
+                ('--debug-pids' if debug_pids else '--no-debug-pids'),
+            ]
+            if color != 1:
+                argv.append('--color' if color >= 2 else '--no-color')
+            argv.append('-')
+            os.execvp(argv[0], argv)
+        except Exception, e:
+            sys.stderr.write('redo-log: exec: %s\n' % e)
+        finally:
+            os._exit(99)
+
+def await_log_reader():
+    if not vars.LOG: return
+    global log_reader_pid
+    if log_reader_pid > 0:
+        # never actually close fd#1 or fd#2; insanity awaits.
+        # replace it with something else instead.
+        # Since our stdout/stderr are attached to redo-log's stdin,
+        # this will notify redo-log that it's time to die (after it finishes
+        # reading the logs)
+        out = open('/dev/tty', 'w')
+        os.dup2(out.fileno(), 1)
+        os.dup2(out.fileno(), 2)
+        os.waitpid(log_reader_pid, 0)

 class ImmediateReturn(Exception):
     def __init__(self, rv):
         Exception.__init__(self, "immediate return with exit code %d" % rv)
@@ -44,12 +123,14 @@ class BuildJob:
         assert(self.lock.owned)
         try:
             try:
-                dirty = self.shouldbuildfunc(self.t)
+                is_target, dirty = self.shouldbuildfunc(self.t)
             except state.CyclicDependencyError:
                 err('cyclic dependency while checking %s\n' % _nice(self.t))
                 raise ImmediateReturn(208)
             if not dirty:
                 # target doesn't need to be built; skip the whole task
+                if is_target:
+                    meta('unchanged', state.target_relpath(self.t))
                 return self._after2(0)
         except ImmediateReturn, e:
             return self._after2(e.rv)
@@ -94,7 +175,7 @@ class BuildJob:
             sf.save()
             return self._after2(0)
         else:
-            err('no rule to make %r\n' % t)
+            err('no rule to redo %r\n' % t)
             return self._after2(1)
         unlink(self.tmpname1)
         unlink(self.tmpname2)
@@ -113,11 +194,13 @@ class BuildJob:
         ]
         if vars.VERBOSE: argv[1] += 'v'
         if vars.XTRACE: argv[1] += 'x'
-        if vars.VERBOSE or vars.XTRACE: log_('\n')
         firstline = open(os.path.join(dodir, dofile)).readline().strip()
         if firstline.startswith('#!/'):
             argv[0:2] = firstline[2:].split(' ')
-        log('%s\n' % _nice(t))
+        # make sure to create the logfile *before* writing the log about it.
+        # that way redo-log won't trace into an obsolete logfile.
+        if vars.LOG: open(state.logname(self.sf.id), 'w')
+        meta('do', state.target_relpath(t))
         self.dodir = dodir
         self.basename = basename
         self.ext = ext
@@ -141,11 +224,14 @@ class BuildJob:
         # hold onto the lock because otherwise we would introduce a race
         # condition; that's why it's called redo-unlocked, because it doesn't
         # grab a lock.
-        argv = ['redo-unlocked', self.sf.name] + [d.name for d in dirty]
-        log('(%s)\n' % _nice(self.t))
+        here = os.getcwd()
+        def _fix(p):
+            return state.relpath(os.path.join(vars.BASE, p), here)
+        argv = (['redo-unlocked', _fix(self.sf.name)] +
+                [_fix(d.name) for d in dirty])
+        meta('check', state.target_relpath(self.t))
         state.commit()
         def run():
-            os.chdir(vars.BASE)
             os.environ['REDO_DEPTH'] = vars.DEPTH + ' '
             signal.signal(signal.SIGPIPE, signal.SIG_DFL)  # python ignores SIGPIPE
             os.execvp(argv[0], argv)
@@ -173,9 +259,32 @@ class BuildJob:
             os.dup2(self.f.fileno(), 1)
             os.close(self.f.fileno())
             close_on_exec(1, False)
+            if vars.LOG:
+                cur_inode = str(os.fstat(2).st_ino)
+                if not vars.LOG_INODE or cur_inode == vars.LOG_INODE:
+                    # .do script has *not* redirected stderr, which means we're
+                    # using redo-log's log saving mode. That means subprocs
+                    # should be logged to their own file. If the .do script
+                    # *does* redirect stderr, that redirection should be inherited
+                    # by subprocs, so we'd do nothing.
+                    logf = open(state.logname(self.sf.id), 'w')
+                    new_inode = str(os.fstat(logf.fileno()).st_ino)
+                    os.environ['REDO_LOG'] = '1'  # .do files can check this
+                    os.environ['REDO_LOG_INODE'] = new_inode
+                    os.dup2(logf.fileno(), 2)
+                    close_on_exec(2, False)
+                    logf.close()
+            else:
+                if 'REDO_LOG_INODE' in os.environ:
+                    del os.environ['REDO_LOG_INODE']
+                os.environ['REDO_LOG'] = ''
             signal.signal(signal.SIGPIPE, signal.SIG_DFL)  # python ignores SIGPIPE
-            if vars.VERBOSE or vars.XTRACE: log_('* %s\n' % ' '.join(self.argv))
+            if vars.VERBOSE or vars.XTRACE:
+                logs.write('* %s' % ' '.join(self.argv).replace('\n', ' '))
             os.execvp(self.argv[0], self.argv)
+            # FIXME: it would be nice to log the exit code to logf.
+            # But that would have to happen in the parent process, which doesn't
+            # have logf open.
             assert(0)
             # returns only if there's an exception
@@ -250,11 +359,7 @@ class BuildJob:
             sf.zap_deps2()
             sf.save()
             f.close()
-            if rv != 0:
-                err('%s: exit code %r\n' % (_nice(t),rv))
-            else:
-                if vars.VERBOSE or vars.XTRACE or vars.DEBUG:
-                    log('%s (done)\n\n' % _nice(t))
+            meta('done', '%d %s' % (rv, state.target_relpath(self.t)))
             return rv

     def _after2(self, rv):
def _after2(self, rv): def _after2(self, rv):
@@ -276,6 +381,27 @@ def main(targets, shouldbuildfunc):
     def done(t, rv):
         if rv:
             retcode[0] = 1

+    if vars.TARGET and not vars.UNLOCKED:
+        me = os.path.join(vars.STARTDIR,
+                          os.path.join(vars.PWD, vars.TARGET))
+        myfile = state.File(name=me)
+        selflock = state.Lock(state.LOG_LOCK_MAGIC + myfile.id)
+    else:
+        selflock = myfile = me = None
+    def cheat():
+        if not selflock: return 0
+        selflock.trylock()
+        if not selflock.owned:
+            # redo-log already owns it: let's cheat.
+            # Give ourselves one extra token so that the "foreground" log
+            # can always make progress.
+            return 1
+        else:
+            # redo-log isn't watching us (yet)
+            selflock.unlock()
+            return 0
+
     # In the first cycle, we just build as much as we can without worrying
     # about any lock contention. If someone else has it locked, we move on.
@@ -292,7 +418,7 @@ def main(targets, shouldbuildfunc):
             seen[t] = 1
             if not jwack.has_token():
                 state.commit()
-            jwack.get_token(t)
+            jwack.ensure_token_or_cheat(t, cheat)
             if retcode[0] and not vars.KEEP_GOING:
                 break
             if not state.check_sane():
@@ -306,9 +432,8 @@ def main(targets, shouldbuildfunc):
             else:
                 lock.trylock()
             if not lock.owned:
-                if vars.DEBUG_LOCKS:
-                    log('%s (locked...)\n' % _nice(t))
-                locked.append((f.id,t))
+                meta('locked', state.target_relpath(t))
+                locked.append((f.id,t,f.name))
             else:
                 # We had to create f before we had a lock, because we need f.id
                 # to make the lock. But someone may have updated the state
@@ -333,6 +458,8 @@ def main(targets, shouldbuildfunc):
         while locked or jwack.running():
             state.commit()
             jwack.wait_all()
+            assert jwack._mytokens == 0
+            jwack.ensure_token_or_cheat('self', cheat)
             # at this point, we don't have any children holding any tokens, so
             # it's okay to block below.
             if retcode[0] and not vars.KEEP_GOING:
@@ -342,7 +469,7 @@ def main(targets, shouldbuildfunc):
                 err('.redo directory disappeared; cannot continue.\n')
                 retcode[0] = 205
                 break
-            fid,t = locked.pop(0)
+            fid,t,fname = locked.pop(0)
             lock = state.Lock(fid)
             backoff = 0.01
             lock.trylock()
@@ -351,8 +478,9 @@ def main(targets, shouldbuildfunc):
                 import random
                 time.sleep(random.random() * min(backoff, 1.0))
                 backoff *= 2
-                if vars.DEBUG_LOCKS:
-                    warn('%s (WAITING)\n' % _nice(t))
+                # after printing this line, redo-log will recurse into t,
+                # whether it's us building it, or someone else.
+                meta('waiting', state.target_relpath(t))
             try:
                 lock.check()
             except state.CyclicDependencyError:
@@ -361,16 +489,17 @@ def main(targets, shouldbuildfunc):
                 return retcode[0]
             # this sequence looks a little silly, but the idea is to
             # give up our personal token while we wait for the lock to
-            # be released; but we should never run get_token() while
+            # be released; but we should never run ensure_token() while
             # holding a lock, or we could cause deadlocks.
             jwack.release_mine()
             lock.waitlock()
+            # now t is definitely free, so we get to decide whether
+            # to build it.
             lock.unlock()
-            jwack.get_token(t)
+            jwack.ensure_token_or_cheat(t, cheat)
             lock.trylock()
             assert(lock.owned)
-            if vars.DEBUG_LOCKS:
-                log('%s (...unlocked!)\n' % _nice(t))
+            meta('unlocked', state.target_relpath(t))
             if state.File(name=t).is_failed():
                 err('%s: failed in another thread\n' % _nice(t))
                 retcode[0] = 2

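The hunk above detects whether a .do script redirected stderr by comparing the inode of fd 2 (via `os.fstat`) against the saved `REDO_LOG_INODE`. The comparison trick can be demonstrated on its own; the helper names below are hypothetical, not redo's:

```python
import os, tempfile

def fd_id(fd):
    # Identify the open file behind a descriptor by (device, inode) --
    # the same signal builder.py reads via os.fstat(2).st_ino.
    st = os.fstat(fd)
    return (st.st_dev, st.st_ino)

def was_redirected(fd, original_id):
    # If the (device, inode) pair changed, someone dup2()ed the fd
    # onto a different file, i.e. the script redirected it.
    return fd_id(fd) != original_id
```

redo passes the original identity through the environment because a child process cannot otherwise tell "inherited fd" apart from "redirected fd": both are just open descriptors by the time the child looks.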

@@ -6,7 +6,7 @@ if [ -e .do_built ]; then
     done <.do_built
 fi
 [ -z "$DO_BUILT" ] && rm -rf .do_built .do_built.dir
-rm -rf minimal/.do_built minimal/.do_built.dir
+rm -rf minimal/.do_built minimal/.do_built.dir website
 redo t/clean Documentation/clean version/clean
 rm -f *~ .*~ */*~ */.*~ *.pyc install.wrapper
 find . -name '*.tmp' -exec rm -fv {} \;


@@ -1,6 +1,6 @@
 import sys, os
 import vars, state, builder
-from log import debug
+from logs import debug

 CLEAN = 0
 DIRTY = 1

jwack.py

@@ -1,42 +1,150 @@
 #
-# beware the jobberwack
+# Implementation of a GNU make-compatible jobserver.
+#
+# The basic idea is that both ends of a pipe (tokenfds) are shared with all
+# subprocesses. At startup, we write one "token" into the pipe for each
+# configured job. (So eg. redo -j20 will put 20 tokens in the pipe.) In
+# order to do work, you must first obtain a token, by reading the other
+# end of the pipe. When you're done working, you write the token back into
+# the pipe so that someone else can grab it.
+#
+# The toplevel process in the hierarchy is what creates the pipes in the
+# first place. Then it puts the pipe file descriptor numbers into MAKEFLAGS,
+# so that subprocesses can pull them back out.
+#
+# As usual, edge cases make all this a bit tricky:
+#
+#  - Every process is defined as owning a token at startup time. This makes
+#    sense because it's backward compatible with single-process make: if a
+#    subprocess neither reads nor writes the pipe, then it has exactly one
+#    token, so it's allowed to do one thread of work.
+#
+#  - Thus, for symmetry, processes also must own a token at exit time.
+#
+#  - In turn, to make *that* work, a parent process must destroy *its* token
+#    upon launching a subprocess. (Destroy, not release, because the
+#    subprocess has created its own token.) It can try to obtain another
+#    token, but if none are available, it has to stop work until one of its
+#    subprocesses finishes. When the subprocess finishes, its token is
+#    destroyed, so the parent creates a new one.
+#
+#  - If our process is going to stop and wait for a lock (eg. because we
+#    depend on a target and someone else is already building that target),
+#    we must give up our token. Otherwise, we're sucking up a "thread" (a
+#    unit of parallelism) just to do nothing. If enough processes are waiting
+#    on a particular lock, then the process building that target might end up
+#    with only a single token, and everything gets serialized.
+#
+#  - Unfortunately this leads to a problem: if we give up our token, we then
+#    have to re-acquire a token before exiting, even if we want to exit with
+#    an error code.
+#
+#  - redo-log wants to linearize output so that it always prints log messages
+#    in the order jobs were started; but because of the above, a job being
+#    logged might end up with no tokens for a long time, waiting for some
+#    other branch of the build to complete.
+#
+# As a result, we extend beyond GNU make's model and make things even more
+# complicated. We add a second pipe, cheatfds, which we use to "cheat" on
+# tokens if our particular job is in the foreground (ie. is the one
+# currently being tailed by redo-log -f). We add at most one token per
+# redo-log instance. If we are the foreground task, and we need a token,
+# and we don't have a token, and we don't have any subtasks (because if we
+# had a subtask, then we're not in the foreground), we synthesize our own
+# token by incrementing _mytokens and _cheats, but we don't read from
+# tokenfds. Then, when it's time to give up our token again, we also won't
+# write back to tokenfds, so the synthesized token disappears.
+#
+# Of course, all that then leads to *another* problem: every process must
+# hold a *real* token when it exits, because its parent has given up a
+# *real* token in order to start this subprocess. If we're holding a cheat
+# token when it's time to exit, then we can't meet this requirement. The
+# obvious thing to do would be to give up the cheat token and wait for a
+# real token, but that might take a very long time, and if we're the last
+# thing preventing our parent from exiting, then redo-log will sit around
+# following our parent until we finally get a token so we can exit,
+# defeating the whole purpose of cheating. Instead of waiting, we write our
+# "cheater" token to cheatfds. Then, any task, upon noticing one of its
+# subprocesses has finished, will check to see if there are any tokens on
+# cheatfds; if so, it will remove one of them and *not* re-create its
+# child's token, thus destroying the cheater token from earlier, and restoring
+# balance.
+#
+# Sorry this is so complicated. I couldn't think of a way to make it
+# simpler :)
 #
 import sys, os, errno, select, fcntl, signal
 from helpers import atoi, close_on_exec
-import state
+import state, vars

 _toplevel = 0
 _mytokens = 1
-_fds = None
+_cheats = 0
+_tokenfds = None
+_cheatfds = None
 _waitfds = {}

 def _debug(s):
     if 0:
         sys.stderr.write('jwack#%d: %s' % (os.getpid(),s))

+def _create_tokens(n):
+    global _mytokens, _cheats
+    assert n >= 0
+    assert _cheats >= 0
+    for _ in xrange(n):
+        if _cheats > 0:
+            _cheats -= 1
+        else:
+            _mytokens += 1
+
+def _destroy_tokens(n):
+    global _mytokens
+    assert _mytokens >= n
+    _mytokens -= n
+
 def _release(n):
-    global _mytokens
-    _mytokens += n
-    _debug('release(%d) -> %d\n' % (n, _mytokens))
-    if _mytokens > 1:
-        os.write(_fds[1], 't' * (_mytokens-1))
-        _mytokens = 1
+    global _mytokens, _cheats
+    assert n >= 0
+    assert _mytokens >= n
+    _debug('%d,%d -> release(%d)\n' % (_mytokens, _cheats, n))
+    n_to_share = 0
+    for _ in xrange(n):
+        _mytokens -= 1
+        if _cheats > 0:
+            _cheats -= 1
+        else:
+            n_to_share += 1
+    assert _mytokens >= 0
+    assert _cheats >= 0
+    if n_to_share:
+        _debug('PUT tokenfds %d\n' % n_to_share)
+        os.write(_tokenfds[1], 't' * n_to_share)
+
+def _release_except_mine():
+    assert _mytokens > 0
+    _release(_mytokens - 1)

 def release_mine():
     global _mytokens
-    assert(_mytokens >= 1)
-    _mytokens -= 1
-    _debug('release_mine() -> %d\n' % _mytokens)
-    os.write(_fds[1], 't')
+    assert _mytokens >= 1
+    _debug('%d,%d -> release_mine()\n' % (_mytokens, _cheats))
+    _release(1)

 def _timeout(sig, frame):
     pass

+# We make the pipes use the first available fd numbers starting at startfd.
+# This makes it easier to differentiate different kinds of pipes when using
+# strace.
 def _make_pipe(startfd):
     (a,b) = os.pipe()
     fds = (fcntl.fcntl(a, fcntl.F_DUPFD, startfd),
@@ -47,38 +155,51 @@ def _make_pipe(startfd):

 def _try_read(fd, n):
+    """Try to read n bytes from fd. Returns: '' on EOF, None if EAGAIN."""
+    assert state.is_flushed()
     # using djb's suggested way of doing non-blocking reads from a blocking
     # socket: http://cr.yp.to/unix/nonblock.html
     # We can't just make the socket non-blocking, because we want to be
     # compatible with GNU Make, and they can't handle it.
     r,w,x = select.select([fd], [], [], 0)
     if not r:
-        return ''  # try again
+        return None  # try again
     # ok, the socket is readable - but some other process might get there
     # first. We have to set an alarm() in case our read() gets stuck.
-    assert(state.is_flushed())
     oldh = signal.signal(signal.SIGALRM, _timeout)
     try:
-        signal.alarm(1)  # emergency fallback
+        signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)  # emergency fallback
         try:
-            b = os.read(_fds[0], 1)
+            b = os.read(fd, 1)
         except OSError, e:
             if e.errno in (errno.EAGAIN, errno.EINTR):
                 # interrupted or it was nonblocking
-                return ''  # try again
+                return None  # try again
             else:
                 raise
     finally:
-        signal.alarm(0)
+        signal.setitimer(signal.ITIMER_REAL, 0, 0)
         signal.signal(signal.SIGALRM, oldh)
-    return b and b or None  # None means EOF
+    return b
+
+def _try_read_all(fd, n):
+    bb = ''
+    while 1:
+        b = _try_read(fd, n)
+        if not b:
+            break
+        bb += b
+    return bb

 def setup(maxjobs):
-    global _fds, _toplevel
-    if _fds:
-        return  # already set up
+    global _tokenfds, _cheatfds, _toplevel
+    assert maxjobs > 0
+    assert not _tokenfds
     _debug('setup(%d)\n' % maxjobs)
     flags = ' ' + os.getenv('MAKEFLAGS', '') + ' '
     FIND1 = ' --jobserver-auth='  # renamed in GNU make 4.2
     FIND2 = ' --jobserver-fds='  # fallback syntax
@@ -103,38 +224,64 @@ def setup(maxjobs):
             raise ValueError('broken --jobserver-auth from make; prefix your Makefile rule with a "+"')
         else:
             raise
-        _fds = (a,b)
-    if maxjobs and not _fds:
+        _tokenfds = (a,b)
+    cheats = os.getenv('REDO_CHEATFDS', '')
+    if cheats:
+        (a,b) = cheats.split(',', 1)
+        a = atoi(a)
+        b = atoi(b)
+        if a <= 0 or b <= 0:
+            raise ValueError('invalid REDO_CHEATFDS: %r' % cheats)
+        _cheatfds = (a,b)
+    if not _tokenfds:
         # need to start a new server
         _toplevel = maxjobs
-        _fds = _make_pipe(100)
-        _release(maxjobs-1)
+        _tokenfds = _make_pipe(100)
+        _create_tokens(maxjobs - 1)
+        _release_except_mine()
         os.putenv('MAKEFLAGS',
                   '%s -j --jobserver-auth=%d,%d --jobserver-fds=%d,%d' %
                   (os.getenv('MAKEFLAGS', ''),
-                   _fds[0], _fds[1],
-                   _fds[0], _fds[1]))
+                   _tokenfds[0], _tokenfds[1],
+                   _tokenfds[0], _tokenfds[1]))
+        _cheatfds = _make_pipe(102)
+        os.putenv('REDO_CHEATFDS', '%d,%d' % (_cheatfds[0], _cheatfds[1]))

-def wait(want_token):
+def _wait(want_token, max_delay):
     rfds = _waitfds.keys()
-    if _fds and want_token:
-        rfds.append(_fds[0])
-    assert(rfds)
-    assert(state.is_flushed())
-    r,w,x = select.select(rfds, [], [])
-    _debug('_fds=%r; wfds=%r; readable: %r\n' % (_fds, _waitfds, r))
+    if want_token:
+        rfds.append(_tokenfds[0])
+    assert rfds
+    assert state.is_flushed()
+    r,w,x = select.select(rfds, [], [], max_delay)
+    _debug('_tokenfds=%r; wfds=%r; readable: %r\n' % (_tokenfds, _waitfds, r))
     for fd in r:
-        if _fds and fd == _fds[0]:
+        if fd == _tokenfds[0]:
             pass
         else:
             pd = _waitfds[fd]
             _debug("done: %r\n" % pd.name)
-            _release(1)
+            # redo subprocesses are expected to die without releasing their
+            # tokens, so things are less likely to get confused if they
+            # die abnormally. That means a token has 'disappeared' and we
+            # now need to recreate it.
+            b = _try_read(_cheatfds[0], 1)
+            _debug('GOT cheatfd\n')
+            if b == None:
+                _create_tokens(1)
+                if has_token():
+                    _release_except_mine()
+            else:
+                # someone exited with _cheats > 0, so we need to compensate
+                # by *not* re-creating a token now.
+                pass
             os.close(fd)
             del _waitfds[fd]
             rv = os.waitpid(pd.pid, 0)
-            assert(rv[0] == pd.pid)
+            assert rv[0] == pd.pid
             _debug("done1: rv=%r\n" % (rv,))
             rv = rv[1]
             if os.WIFEXITED(rv):
@@ -146,36 +293,59 @@

 def has_token():
+    assert _mytokens >= 0
     if _mytokens >= 1:
         return True

-def get_token(reason):
-    assert(state.is_flushed())
+def ensure_token(reason, max_delay=None):
     global _mytokens
-    assert(_mytokens <= 1)
-    setup(1)
+    assert state.is_flushed()
+    assert _mytokens <= 1
     while 1:
         if _mytokens >= 1:
             _debug("_mytokens is %d\n" % _mytokens)
-            assert(_mytokens == 1)
+            assert _mytokens == 1
             _debug('(%r) used my own token...\n' % reason)
             break
-        assert(_mytokens < 1)
+        assert _mytokens < 1
         _debug('(%r) waiting for tokens...\n' % reason)
-        wait(want_token=1)
+        _wait(want_token=1, max_delay=max_delay)
         if _mytokens >= 1:
             break
-        assert(_mytokens < 1)
-        if _fds:
-            b = _try_read(_fds[0], 1)
-            if b == None:
-                raise Exception('unexpected EOF on token read')
-            if b:
-                _mytokens += 1
-                _debug('(%r) got a token (%r).\n' % (reason, b))
-                break
-    assert(_mytokens <= 1)
+        assert _mytokens < 1
+        b = _try_read(_tokenfds[0], 1)
+        _debug('GOT tokenfd\n')
+        if b == '':
+            raise Exception('unexpected EOF on token read')
+        if b:
+            _mytokens += 1
+            _debug('(%r) got a token (%r).\n' % (reason, b))
+            break
+        if max_delay != None:
+            break
+    assert _mytokens <= 1
+
+def ensure_token_or_cheat(reason, cheatfunc):
+    global _mytokens, _cheats
+    backoff = 0.01
+    while not has_token():
+        while running() and not has_token():
+            # If we already have a subproc running, then effectively we
+            # already have a token. Don't create a cheater token unless
+            # we're completely idle.
+            ensure_token(reason, max_delay=None)
+        ensure_token(reason, max_delay=min(1.0, backoff))
+        backoff *= 2
+        if not has_token():
+            assert _mytokens == 0
+            n = cheatfunc()
+            _debug('%s: %s: cheat = %d\n' % (vars.TARGET, reason, n))
+            if n > 0:
+                _mytokens += n
+                _cheats += n
+                break

 def running():
@@ -183,37 +353,50 @@ def running():

 def wait_all():
-    _debug("wait_all\n")
-    assert(state.is_flushed())
-    while running():
+    _debug("%d,%d -> wait_all\n" % (_mytokens, _cheats))
+    assert state.is_flushed()
+    while 1:
         while _mytokens >= 1:
             release_mine()
+        if not running():
+            break
         _debug("wait_all: wait()\n")
-        wait(want_token=0)
+        _wait(want_token=0, max_delay=None)
     _debug("wait_all: empty list\n")
-    get_token('self')  # get my token back
     if _toplevel:
-        bb = ''
-        while 1:
-            b = _try_read(_fds[0], 8192)
-            bb += b
-            if not b: break
-        if len(bb) != _toplevel-1:
-            raise Exception('on exit: expected %d tokens; found %r'
-                            % (_toplevel-1, len(bb)))
-        os.write(_fds[1], bb)
+        # If we're the toplevel and we're sure no child processes remain,
+        # then we know we're totally idle. Self-test to ensure no tokens
+        # mysteriously got created/destroyed.
+        tokens = _try_read_all(_tokenfds[0], 8192)
+        cheats = _try_read_all(_cheatfds[0], 8192)
+        _debug('toplevel: GOT %d tokens and %d cheats\n'
+               % (len(tokens), len(cheats)))
+        if len(tokens) - len(cheats) != _toplevel:
+            raise Exception('on exit: expected %d tokens; found %r-%r'
+                            % (_toplevel, len(tokens), len(cheats)))
+        os.write(_tokenfds[1], tokens)
+    # note: when we return, we have *no* tokens, not even our own!
+    # If caller wants to continue, they have to obtain one right away.

 def force_return_tokens():
     n = len(_waitfds)
-    if n:
-        _debug('%d tokens left in force_return_tokens\n' % n)
+    _debug('%d,%d -> %d jobs left in force_return_tokens\n'
+           % (_mytokens, _cheats, n))
+    _debug('returning %d tokens\n' % n)
     for k in _waitfds.keys():
         del _waitfds[k]
-    if _fds:
-        _release(n)
-    assert(state.is_flushed())
+    _create_tokens(n)
+    if has_token():
+        _release_except_mine()
+    assert _mytokens == 1, 'mytokens=%d' % _mytokens
+    assert _cheats <= _mytokens, 'mytokens=%d cheats=%d' % (_mytokens, _cheats)
+    assert _cheats in (0, 1), 'cheats=%d' % _cheats
+    if _cheats:
+        _debug('%d,%d -> force_return_tokens: recovering final token\n'
+               % (_mytokens, _cheats))
+        _destroy_tokens(_cheats)
+        os.write(_cheatfds[1], 't' * _cheats)
+    assert state.is_flushed()

 def _pre_job(r, w, pfn):
@ -234,13 +417,13 @@ class Job:
def start_job(reason, jobfunc, donefunc): def start_job(reason, jobfunc, donefunc):
assert(state.is_flushed()) assert state.is_flushed()
global _mytokens global _mytokens
assert(_mytokens <= 1) assert _mytokens <= 1
get_token(reason) assert _mytokens == 1
assert(_mytokens >= 1) # Subprocesses always start with 1 token, so we have to destroy ours
assert(_mytokens == 1) # in order for the universe to stay in balance.
_mytokens -= 1 _destroy_tokens(1)
r,w = _make_pipe(50) r,w = _make_pipe(50)
pid = os.fork() pid = os.fork()
if pid == 0: if pid == 0:
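The token machinery in this hunk follows the GNU-make-style jobserver idea: parallelism tokens are single bytes sitting in a pipe, read to acquire a job slot and written back to release it. A minimal standalone sketch of that idea (our own hypothetical `TokenPool` class, not redo's actual API; redo's real implementation uses `_tokenfds`, `_cheatfds`, etc. as above):

```python
import os

class TokenPool(object):
    """Jobserver sketch: each byte in the pipe is one job slot."""
    def __init__(self, maxjobs):
        self.r, self.w = os.pipe()
        # Like the toplevel setup in jwack: preload maxjobs-1 tokens,
        # because every process implicitly owns one token for itself.
        os.write(self.w, b't' * (maxjobs - 1))
        self.mine = 1  # our own implicit token

    def get(self):
        if self.mine < 1:
            b = os.read(self.r, 1)  # blocks until some job finishes
            assert b != b''         # EOF would mean a broken pool
            self.mine += 1

    def release(self):
        assert self.mine >= 1
        self.mine -= 1
        os.write(self.w, b't')      # put the token back for others

pool = TokenPool(2)
pool.get()      # no-op: we already hold our own token
pool.release()  # give it up while idle/waiting
pool.get()      # take a token back (reads the byte we just wrote)
```

The "cheat" tokens above extend this scheme: when the pool is empty but progress is still required, a temporary extra token is minted and later destroyed to keep the global count balanced.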
log.py
@@ -1,47 +0,0 @@
import sys, os
import vars

# By default, no output colouring.
RED = ""
GREEN = ""
YELLOW = ""
BOLD = ""
PLAIN = ""

if sys.stderr.isatty() and (os.environ.get('TERM') or 'dumb') != 'dumb':
    # ...use ANSI formatting codes.
    RED = "\x1b[31m"
    GREEN = "\x1b[32m"
    YELLOW = "\x1b[33m"
    BOLD = "\x1b[1m"
    PLAIN = "\x1b[m"


def log_(s):
    sys.stdout.flush()
    if vars.DEBUG_PIDS:
        sys.stderr.write('%d %s' % (os.getpid(), s))
    else:
        sys.stderr.write(s)
    sys.stderr.flush()

def log(s):
    log_(''.join([GREEN, "redo ", vars.DEPTH, BOLD, s, PLAIN]))

def err(s):
    log_(''.join([RED, "redo ", vars.DEPTH, BOLD, s, PLAIN]))

def warn(s):
    log_(''.join([YELLOW, "redo ", vars.DEPTH, BOLD, s, PLAIN]))

def debug(s):
    if vars.DEBUG >= 1:
        log_('redo: %s%s' % (vars.DEPTH, s))

def debug2(s):
    if vars.DEBUG >= 2:
        log_('redo: %s%s' % (vars.DEPTH, s))

def debug3(s):
    if vars.DEBUG >= 3:
        log_('redo: %s%s' % (vars.DEPTH, s))
logs.py Normal file
@@ -0,0 +1,149 @@
import os, re, sys, time
import vars


def check_tty(file, color):
    global RED, GREEN, YELLOW, BOLD, PLAIN
    color_ok = file.isatty() and (os.environ.get('TERM') or 'dumb') != 'dumb'
    if (color and color_ok) or color >= 2:
        # ...use ANSI formatting codes.
        RED = "\x1b[31m"
        GREEN = "\x1b[32m"
        YELLOW = "\x1b[33m"
        BOLD = "\x1b[1m"
        PLAIN = "\x1b[m"
    else:
        RED = ""
        GREEN = ""
        YELLOW = ""
        BOLD = ""
        PLAIN = ""


class RawLog(object):
    def __init__(self, file):
        self.file = file

    def write(self, s):
        assert('\n' not in s)
        sys.stdout.flush()
        sys.stderr.flush()
        self.file.write(s + '\n')
        self.file.flush()


REDO_RE = re.compile(r'@@REDO:([^@]+)@@ (.*)$')

class PrettyLog(object):
    def __init__(self, file):
        self.topdir = os.getcwd()
        self.file = file

    def _pretty(self, pid, color, s):
        if vars.DEBUG_PIDS:
            redo = '%-6d redo ' % pid
        else:
            redo = 'redo '
        self.file.write(''.join([color, redo, vars.DEPTH,
                                 BOLD if color else '', s, PLAIN, '\n']))

    def write(self, s):
        assert('\n' not in s)
        sys.stdout.flush()
        sys.stderr.flush()
        g = REDO_RE.match(s)
        if g:
            all = g.group(0)
            self.file.write(s[:-len(all)])
            words = g.group(1).split(':')
            text = g.group(2)
            kind, pid, when = words[0:3]
            pid = int(pid)
            if kind == 'unchanged':
                self._pretty(pid, '', '%s (unchanged)' % text)
            elif kind == 'check':
                self._pretty(pid, GREEN, '(%s)' % text)
            elif kind == 'do':
                self._pretty(pid, GREEN, text)
            elif kind == 'done':
                rv, name = text.split(' ', 1)
                rv = int(rv)
                if rv:
                    self._pretty(pid, RED, '%s (exit %d)' % (name, rv))
                elif vars.VERBOSE or vars.XTRACE or vars.DEBUG:
                    self._pretty(pid, GREEN, '%s (done)' % name)
                    self.file.write('\n')
            elif kind == 'locked':
                if vars.DEBUG_LOCKS:
                    self._pretty(pid, GREEN, '%s (locked...)' % text)
            elif kind == 'waiting':
                if vars.DEBUG_LOCKS:
                    self._pretty(pid, GREEN, '%s (WAITING)' % text)
            elif kind == 'unlocked':
                if vars.DEBUG_LOCKS:
                    self._pretty(pid, GREEN, '%s (...unlocked!)' % text)
            elif kind == 'error':
                self.file.write(''.join([RED, 'redo: ',
                                         BOLD, text, PLAIN, '\n']))
            elif kind == 'warning':
                self.file.write(''.join([YELLOW, 'redo: ',
                                         BOLD, text, PLAIN, '\n']))
            elif kind == 'debug':
                self._pretty(pid, '', text)
            else:
                assert 0, 'Unexpected @@REDO kind: %r' % kind
        else:
            self.file.write(s + '\n')
        self.file.flush()


_log = None

def setup(file, pretty, color):
    global _log
    if pretty or vars.PRETTY:
        check_tty(file, color=color)
        _log = PrettyLog(file=file)
    else:
        _log = RawLog(file=file)

# FIXME: explicitly initialize in each program, for clarity
setup(file=sys.stderr, pretty=vars.PRETTY, color=vars.COLOR)


def write(s):
    _log.write(s)


def meta(kind, s, pid=None):
    assert(':' not in kind)
    assert('@' not in kind)
    assert('\n' not in s)
    if pid == None: pid = os.getpid()
    write('@@REDO:%s:%d:%.4f@@ %s'
          % (kind, pid, time.time(), s))


def err(s):
    s = s.rstrip()
    meta('error', s)

def warn(s):
    s = s.rstrip()
    meta('warning', s)

def debug(s):
    if vars.DEBUG >= 1:
        s = s.rstrip()
        meta('debug', s)

def debug2(s):
    if vars.DEBUG >= 2:
        s = s.rstrip()
        meta('debug', s)

def debug3(s):
    if vars.DEBUG >= 3:
        s = s.rstrip()
        meta('debug', s)
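For reference, the control-line wire format that `meta()` emits above is `@@REDO:kind:pid:timestamp@@ text`. A tiny standalone parser (our own hypothetical `parse_meta` helper, mirroring `REDO_RE`) shows how the fields come apart:

```python
import re

# Same pattern as logs.py's REDO_RE: everything between the @@ markers is
# a colon-separated header, the rest of the line is free-form text.
REDO_RE = re.compile(r'@@REDO:([^@]+)@@ (.*)$')

def parse_meta(line):
    """Split one log line into (kind, pid, timestamp, text), or None."""
    g = REDO_RE.match(line)
    if not g:
        return None  # ordinary build output, not a control line
    kind, pid, when = g.group(1).split(':')[0:3]
    return kind, int(pid), float(when), g.group(2)

print(parse_meta('@@REDO:do:123:1542672784.5000@@ path/to/target'))
```

Because the header is self-describing, `redo-log` can linearize and pretty-print logs written by any number of concurrent subprocesses without extra coordination.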
@@ -25,3 +25,4 @@ pages:
 - redo-targets: redo-targets.md
 - redo-ood: redo-ood.md
 - redo-whichdo: redo-whichdo.md
+- redo-log: redo-log.md
@@ -1,6 +1,6 @@
 import os
 import vars
-from log import err, debug2
+from logs import err, debug2

 def _default_do_files(filename):
@@ -1,7 +1,7 @@
 #!/usr/bin/env python2
 import sys, os
 import vars, state
-from log import err
+from logs import err

 try:
@@ -1,12 +1,12 @@
 #!/usr/bin/env python2
-import sys, os
+import sys, os, traceback
 import vars_init
 vars_init.init(sys.argv[1:])

 import vars, state, builder, jwack, deps
 from helpers import unlink
-from log import debug, debug2, err
+from logs import debug, debug2, err


 def should_build(t):
     f = state.File(name=t)
@@ -14,11 +14,14 @@ def should_build(t):
         raise builder.ImmediateReturn(32)
     dirty = deps.isdirty(f, depth = '', max_changed = vars.RUNID,
                          already_checked=[])
-    return dirty==[f] and deps.DIRTY or dirty
+    return f.is_generated, dirty==[f] and deps.DIRTY or dirty

 rv = 202
 try:
+    if vars_init.is_toplevel:
+        builder.start_stdin_log_reader(status=True, details=True,
+            pretty=True, color=True, debug_locks=False, debug_pids=False)
     if vars.TARGET and not vars.UNLOCKED:
         me = os.path.join(vars.STARTDIR,
                           os.path.join(vars.PWD, vars.TARGET))
@@ -27,6 +30,7 @@ try:
     else:
         f = me = None
         debug2('redo-ifchange: not adding depends.\n')
+    jwack.setup(1)
     try:
         targets = sys.argv[1:]
         if f:
@@ -39,8 +43,17 @@ try:
         try:
             state.rollback()
         finally:
-            jwack.force_return_tokens()
+            try:
+                jwack.force_return_tokens()
+            except Exception, e:
+                traceback.print_exc(100, sys.stderr)
+                err('unexpected error: %r\n' % e)
+                rv = 1
 except KeyboardInterrupt:
+    if vars_init.is_toplevel:
+        builder.await_log_reader()
     sys.exit(200)

 state.commit()
+if vars_init.is_toplevel:
+    builder.await_log_reader()
 sys.exit(rv)
@@ -1,7 +1,7 @@
 #!/usr/bin/env python2
 import sys, os
 import vars, state
-from log import err
+from logs import err

 try:
redo-log Symbolic link
@@ -0,0 +1 @@
+redo-log.py
redo-log.py Executable file
@@ -0,0 +1,257 @@
#!/usr/bin/env python
import errno, fcntl, os, re, struct, sys, termios, time
from atoi import atoi
import options

optspec = """
redo-log [options...] [targets...]
--
r,recursive  show build logs for dependencies too
u,unchanged  show lines for dependencies not needing to be rebuilt
f,follow     keep watching for more lines to be appended (like tail -f)
no-details   only show 'redo' recursion trace, not build output
no-status    don't display build summary line in --follow
no-pretty    don't pretty-print logs, show raw @@REDO output instead
no-color     disable ANSI color; --color to force enable (default: auto)
debug-locks  print messages about file locking (useful for debugging)
debug-pids   print process ids in log messages (useful for debugging)
ack-fd=      (internal use only) print REDO-OK to this fd upon starting
"""
o = options.Options(optspec)
(opt, flags, extra) = o.parse(sys.argv[1:])

targets = extra

import vars_init
vars_init.init(list(targets))

import vars, logs, state

topdir = os.getcwd()
already = set()
queue = []
depth = []
total_lines = 0
status = None
start_time = time.time()

# regexp for matching "redo" lines in the log, which we use for recursion.
# format:
#   redo path/to/target which might have spaces
#   redo [unchanged] path/to/target which might have spaces
#   redo path/to/target which might have spaces (comment)
REDO_LINE_RE = re.compile(r'^@@REDO:([^@]+)@@ (.*)\n$')


def _atoi(s):
    try:
        return int(s)
    except TypeError:
        return 0


def _tty_width():
    s = struct.pack("HHHH", 0, 0, 0, 0)
    try:
        import fcntl, termios
        s = fcntl.ioctl(sys.stderr.fileno(), termios.TIOCGWINSZ, s)
    except (IOError, ImportError):
        return _atoi(os.environ.get('WIDTH')) or 70
    (ysize,xsize,ypix,xpix) = struct.unpack('HHHH', s)
    return xsize or 70


def is_locked(fid):
    return (fid is not None) and not state.Lock(fid=fid).trylock()


def _fix_depth():
    vars.DEPTH = len(depth) * ' '


def _rel(topdir, mydir, path):
    return os.path.relpath(os.path.join(topdir, mydir, path), topdir)


def catlog(t):
    global total_lines, status
    if t in already:
        return
    if t != '-':
        depth.append(t)
        _fix_depth()
    already.add(t)
    mydir = os.path.dirname(t)
    if t == '-':
        f = sys.stdin
        fid = None
        loglock = None
        logname = None
    else:
        try:
            sf = state.File(name=t, allow_add=False)
        except KeyError:
            sys.stderr.write('redo-log: [%s] %r: not known to redo.\n'
                             % (os.getcwd(), t,))
            sys.exit(24)
        fid = sf.id
        del sf
        state.rollback()
        logname = state.logname(fid)
        loglock = state.Lock(fid + state.LOG_LOCK_MAGIC)
        loglock.waitlock(shared=True)
        f = None
    delay = 0.01
    was_locked = is_locked(fid)
    line_head = ''
    width = _tty_width()
    while 1:
        if not f:
            try:
                f = open(logname)
            except IOError, e:
                if e.errno == errno.ENOENT:
                    # ignore files without logs
                    pass
                else:
                    raise
        if f:
            # Note: normally includes trailing \n.
            # In 'follow' mode, might get a line with no trailing \n
            # (eg. when ./configure is halfway through a test), which we
            # deal with below.
            line = f.readline()
        else:
            line = None
        if not line and (not opt.follow or not was_locked):
            # file not locked, and no new lines: done
            break
        if not line:
            was_locked = is_locked(fid)
            if opt.follow:
                # Don't display status line for extremely short-lived runs
                if opt.status and time.time() - start_time > 1.0:
                    width = _tty_width()
                    head = 'redo %s ' % ('{:,}'.format(total_lines))
                    tail = ''
                    for n in reversed(depth):
                        remain = width - len(head) - len(tail)
                        # always leave room for a final '... ' prefix
                        if remain < len(n) + 4 + 1 or remain <= 4:
                            if len(n) < 6 or remain < 6 + 1 + 4:
                                tail = '... %s' % tail
                            else:
                                start = len(n) - (remain - 3 - 1)
                                tail = '...%s %s' % (n[start:], tail)
                            break
                        elif n != '-':
                            tail = n + ' ' + tail
                    status = head + tail
                    if len(status) > width:
                        sys.stderr.write('\nOVERSIZE STATUS (%d):\n%r\n' %
                                         (len(status), status))
                    assert(len(status) <= width)
                    sys.stdout.flush()
                    sys.stderr.write('\r%-*.*s\r' % (width, width, status))
                time.sleep(min(delay, 1.0))
                delay += 0.01
            continue
        total_lines += 1
        delay = 0.01
        if not line.endswith('\n'):
            line_head += line
            continue
        if line_head:
            line = line_head + line
            line_head = ''
        if status:
            sys.stdout.flush()
            sys.stderr.write('\r%-*.*s\r' % (width, width, ''))
            status = None
        g = re.match(REDO_LINE_RE, line)
        if g:
            # FIXME: print prefix if @@REDO is not at start of line.
            #   logs.PrettyLog does it, but only if we actually call .write().
            words, text = g.groups()
            kind, pid, when = words.split(':')[0:3]
            pid = atoi(pid)
            relname = _rel(topdir, mydir, text)
            fixname = os.path.normpath(os.path.join(mydir, text))
            if kind == 'unchanged':
                if opt.unchanged:
                    if opt.debug_locks:
                        logs.meta(kind, relname, pid=pid)
                    elif fixname not in already:
                        logs.meta('do', relname, pid=pid)
                    if opt.recursive:
                        if loglock: loglock.unlock()
                        catlog(os.path.join(mydir, text))
                        if loglock: loglock.waitlock(shared=True)
                    already.add(fixname)
            elif kind in ('do', 'waiting', 'locked', 'unlocked'):
                if opt.debug_locks:
                    logs.meta(kind, relname, pid=pid)
                    logs.write(line.rstrip())
                elif fixname not in already:
                    logs.meta('do', relname, pid=pid)
                if opt.recursive:
                    assert text
                    if loglock: loglock.unlock()
                    catlog(os.path.join(mydir, text))
                    if loglock: loglock.waitlock(shared=True)
                already.add(fixname)
            elif kind == 'done':
                rv, name = text.split(' ', 1)
                logs.meta(kind, rv + ' ' + _rel(topdir, mydir, name))
            else:
                logs.write(line.rstrip())
        else:
            if opt.details:
                logs.write(line.rstrip())
    if loglock:
        loglock.unlock()
    if status:
        sys.stdout.flush()
        sys.stderr.write('\r%-*.*s\r' % (width, width, ''))
        status = None
    if line_head:
        # partial line never got terminated
        print line_head
    if t != '-':
        assert(depth[-1] == t)
        depth.pop(-1)
        _fix_depth()


try:
    if not targets:
        sys.stderr.write('redo-log: give at least one target; maybe "all"?\n')
        sys.exit(1)
    if opt.status < 2 and not os.isatty(2):
        opt.status = False
    logs.setup(file=sys.stdout, pretty=opt.pretty, color=opt.color)
    if opt.debug_locks:
        vars.DEBUG_LOCKS = 1
    if opt.debug_pids:
        vars.DEBUG_PIDS = 1
    if opt.ack_fd:
        # Write back to owner, to let them know we started up okay and
        # will be able to see their error output, so it's okay to close
        # their old stderr.
        ack_fd = int(opt.ack_fd)
        assert(ack_fd > 2)
        if os.write(ack_fd, 'REDO-OK\n') != 8:
            raise Exception('write to ack_fd returned wrong length')
        os.close(ack_fd)
    queue += targets
    while queue:
        t = queue.pop(0)
        if t != '-':
            logs.meta('do', _rel(topdir, '.', t), pid=0)
        catlog(t)
except KeyboardInterrupt:
    sys.exit(200)
except IOError, e:
    if e.errno == errno.EPIPE:
        pass
    else:
        raise
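The `_tty_width()` function above relies on the `TIOCGWINSZ` ioctl, which asks the kernel for the controlling terminal's dimensions. A self-contained sketch of the same technique (our own hypothetical `tty_width` helper, with the environment fallback made slightly more defensive than the original):

```python
import fcntl
import os
import struct
import termios

def tty_width(fd=2, default=70):
    """Return the terminal width for fd, or a fallback when fd isn't a tty."""
    try:
        # TIOCGWINSZ fills in a struct winsize: rows, cols, xpixel, ypixel.
        s = fcntl.ioctl(fd, termios.TIOCGWINSZ,
                        struct.pack('HHHH', 0, 0, 0, 0))
    except (IOError, OSError):
        # Not a tty (e.g. output redirected to a file): honor $WIDTH if set.
        try:
            return int(os.environ.get('WIDTH', '')) or default
        except ValueError:
            return default
    rows, cols, xpix, ypix = struct.unpack('HHHH', s)
    return cols or default

print(tty_width())
```

Querying the width on every status-line refresh, as `catlog()` does, is what lets the summary line track live terminal resizes instead of caching a stale size.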
@@ -5,7 +5,7 @@ import vars_init
 vars_init.init([])

 import vars, state, deps
-from log import err
+from logs import err

 if len(sys.argv[1:]) != 0:
     err('%s: no arguments expected.\n' % sys.argv[0])
@@ -5,7 +5,7 @@ import vars_init
 vars_init.init([])

 import state
-from log import err
+from logs import err

 if len(sys.argv[1:]) != 0:
     err('%s: no arguments expected.\n' % sys.argv[0])
@@ -1,7 +1,7 @@
 #!/usr/bin/env python2
 import sys, os
 import vars, state
-from log import err, debug2
+from logs import err, debug2

 if len(sys.argv) > 1:
     err('%s: no arguments expected.\n' % sys.argv[0])
@@ -5,7 +5,7 @@ import vars_init
 vars_init.init([])

 import state
-from log import err
+from logs import err

 if len(sys.argv[1:]) != 0:
     err('%s: no arguments expected.\n' % sys.argv[0])
@@ -1,7 +1,7 @@
 #!/usr/bin/env python2
 import sys, os
 import state
-from log import err
+from logs import err

 if len(sys.argv[1:]) < 2:
     err('%s: at least 2 arguments expected.\n' % sys.argv[0])
@@ -5,7 +5,7 @@ import vars_init
 vars_init.init_no_state()

 import paths
-from log import err
+from logs import err

 if len(sys.argv[1:]) != 1:
     err('%s: exactly one argument expected.\n' % sys.argv[0])
redo.py
@@ -1,5 +1,5 @@
 #!/usr/bin/env python2
-import sys, os
+import sys, os, traceback
 import options
 from helpers import atoi
@@ -12,6 +12,11 @@ v,verbose print commands as they are read from .do files (variables intact)
 x,xtrace     print commands as they are executed (variables expanded)
 k,keep-going keep going as long as possible even if some targets fail
 shuffle      randomize the build order to find dependency bugs
+no-details   only show 'redo' recursion trace (to see more later, use redo-log)
+no-status    don't display build summary line at the bottom of the screen
+no-log       don't capture error output, just let it flow straight to stderr
+no-pretty    don't pretty-print logs, show raw @@REDO output instead
+no-color     disable ANSI color; --color to force enable (default: auto)
 debug-locks  print messages about file locking (useful for debugging)
 debug-pids   print process ids as part of log messages (useful for debugging)
 version      print the current version and exit
@@ -40,13 +45,27 @@ if opt.debug_locks:
 if opt.debug_pids:
     os.environ['REDO_DEBUG_PIDS'] = '1'

+# This is slightly tricky: the log and pretty options default to true.  We
+# want to inherit that 'true' value from parent processes *unless* someone
+# explicitly specifies the reverse.
+if opt.no_log:
+    os.environ['REDO_LOG'] = '0'
+if opt.no_pretty:
+    os.environ['REDO_PRETTY'] = '0'
+if opt.no_color:
+    os.environ['REDO_COLOR'] = '0'
 import vars_init
 vars_init.init(targets)

 import vars, state, builder, jwack
-from log import warn, err
+from logs import warn, err

 try:
+    if vars_init.is_toplevel:
+        builder.start_stdin_log_reader(status=opt.status, details=opt.details,
+            pretty=opt.pretty, color=opt.color,
+            debug_locks=opt.debug_locks, debug_pids=opt.debug_pids)
     for t in targets:
         if os.path.exists(t):
             f = state.File(name=t)
@@ -61,13 +80,22 @@ try:
     jwack.setup(j)
     try:
         assert(state.is_flushed())
-        retcode = builder.main(targets, lambda t: True)
+        retcode = builder.main(targets, lambda t: (True, True))
         assert(state.is_flushed())
     finally:
         try:
             state.rollback()
         finally:
-            jwack.force_return_tokens()
+            try:
+                jwack.force_return_tokens()
+            except Exception, e:
+                traceback.print_exc(100, sys.stderr)
+                err('unexpected error: %r\n' % e)
+                retcode = 1
+    if vars_init.is_toplevel:
+        builder.await_log_reader()
     sys.exit(retcode)
 except KeyboardInterrupt:
+    if vars_init.is_toplevel:
+        builder.await_log_reader()
     sys.exit(200)
@@ -1,7 +1,7 @@
 import sys, os, errno, glob, stat, fcntl, sqlite3
 import vars
 from helpers import unlink, close_on_exec, join
-from log import warn, err, debug2, debug3
+from logs import warn, err, debug2, debug3

 # When the module is imported, change the process title.
 # We do it here because this module is imported by all the scripts.
@@ -21,6 +21,8 @@ ALWAYS='//ALWAYS' # an invalid filename that is always marked as dirty
 STAMP_DIR='dir'     # the stamp of a directory; mtime is unhelpful
 STAMP_MISSING='0'   # the stamp of a nonexistent file

+LOG_LOCK_MAGIC=0x10000000  # fid offset for "log locks"
+
 class CyclicDependencyError(Exception): pass
@@ -163,6 +165,21 @@ def relpath(t, base):
     return join('/', tparts)

+# Return a path for t, if cwd were the dirname of vars.TARGET.
+# This is tricky!  STARTDIR+PWD is the directory for the *dofile*, when
+# the dofile was started.  However, inside the dofile, someone may have done
+# a chdir to anywhere else.  vars.TARGET is relative to the dofile path, so
+# we have to first figure out where the dofile was, then find TARGET relative
+# to that, then find t relative to that.
+#
+# FIXME: find some cleaner terminology for all these different paths.
+def target_relpath(t):
+    dofile_dir = os.path.abspath(os.path.join(vars.STARTDIR, vars.PWD))
+    target_dir = os.path.abspath(
+        os.path.dirname(os.path.join(dofile_dir, vars.TARGET)))
+    return relpath(t, target_dir)
+
 def warn_override(name):
     warn('%s - you modified it; skipping\n' % name)
@@ -174,7 +191,7 @@ class File(object):
     # use this mostly to avoid accidentally assigning to typos
     __slots__ = ['id'] + _file_cols[1:]

-    def _init_from_idname(self, id, name):
+    def _init_from_idname(self, id, name, allow_add):
         q = ('select %s from Files ' % join(', ', _file_cols))
         if id != None:
             q += 'where rowid=?'
@@ -189,7 +206,9 @@ class File(object):
         row = d.execute(q, l).fetchone()
         if not row:
             if not name:
-                raise Exception('No file with id=%r name=%r' % (id, name))
+                raise KeyError('No file with id=%r name=%r' % (id, name))
+            elif not allow_add:
+                raise KeyError('No file with name=%r' % (name,))
             try:
                 _write('insert into Files (name) values (?)', [name])
             except sqlite3.IntegrityError:
@@ -207,17 +226,17 @@ class File(object):
         if self.name == ALWAYS and self.changed_runid < vars.RUNID:
             self.changed_runid = vars.RUNID

-    def __init__(self, id=None, name=None, cols=None):
+    def __init__(self, id=None, name=None, cols=None, allow_add=True):
         if cols:
             return self._init_from_cols(cols)
         else:
-            return self._init_from_idname(id, name)
+            return self._init_from_idname(id, name, allow_add=allow_add)

     def __repr__(self):
         return "File(%r)" % (self.nicename(),)

     def refresh(self):
-        self._init_from_idname(self.id, None)
+        self._init_from_idname(self.id, None, allow_add=False)

     def save(self):
         cols = join(', ', ['%s=?'%i for i in _file_cols[2:]])
@@ -324,6 +343,11 @@ def files():
         yield File(cols=cols)

+def logname(fid):
+    """Given the id of a File, return the filename of its build log."""
+    return os.path.join(vars.BASE, '.redo', 'log.%d' % fid)
+
 # FIXME: I really want to use fcntl F_SETLK, F_SETLKW, etc here.  But python
 # doesn't do the lockdata structure in a portable way, so we have to use
 # fcntl.lockf() instead.  Usually this is just a wrapper for fcntl, so it's
@@ -365,10 +389,13 @@ class Lock:
                 raise
         else:
             self.owned = True
+        return self.owned

-    def waitlock(self):
+    def waitlock(self, shared=False):
         self.check()
-        fcntl.lockf(self.lockfile, fcntl.LOCK_EX, 0, 0)
+        fcntl.lockf(self.lockfile,
+                    fcntl.LOCK_SH if shared else fcntl.LOCK_EX,
+                    0, 0)
         self.owned = True

     def unlock(self):
t/370-logs/.gitignore vendored Normal file
@@ -0,0 +1,3 @@
pid
x
y
t/370-logs/a/b/xlog.do Normal file
@@ -0,0 +1,5 @@
read pid <../../pid
# Test that log retrieval works correctly when run from a different base dir.
redo-log -ru ../../x | grep -q "^$pid x stderr" || exit 45
redo-log -ru ../../x | grep -q "^$pid y stderr" || exit 46
t/370-logs/all.do Normal file
@@ -0,0 +1,52 @@
exec >&2
rm -f x pid
pid=$$
echo "$pid" >pid
xout=$(redo x)
[ "$(printf "$xout" | wc -l)" -eq 0 ] || exit 2

if [ -n "$REDO_LOG" ]; then
	# redo has redo-log support enabled, so check that it saves logs.

	# recursive log dump should show both x and y stderr.
	redo-log -ru x | grep -q "^$pid x stderr" || exit 10
	redo-log -ru x | grep -q "^$pid y stderr" || exit 11

	# stdout captured by redo into the files x and y, *not* to log
	redo-log -ru x | grep -q "^$pid x stdout" && exit 20
	redo-log -ru x | grep -q "^$pid y stdout" && exit 21
	[ "$(cat x)" = "$pid x stdout" ] || exit 22
	[ "$(cat y)" = "$pid y stdout" ] || exit 23

	# non-recursive log dump of x should *not* include y
	redo-log x | grep -q "^$pid y stdout" && exit 30
	redo-log x | grep -q "^$pid y stderr" && exit 31

	redo a/b/xlog
	(cd a && redo b/xlog)

	# Test retrieval from a different $PWD.
	(
		cd a/b || exit 40
		redo-log -ru ../../x | grep -q "^$pid x stderr" || exit 41
		redo-log -ru ../../x | grep -q "^$pid y stderr" || exit 42
	) || exit
fi

# whether or not redo-log is available, redirecting stderr should work.
pid=$$-bork
rm -f x pid
echo "$pid" >pid
out=$(redo x 2>&1)

# x's stderr should obviously go where we sent it
echo "$out" | grep -q "^$pid x stderr" || exit 50

# This one is actually tricky: with redo-log, x's call to 'redo y' would
# normally implicitly redirect y's stderr to a new log.  redo needs to
# detect that we've already redirected it where we want, and not take it
# away.
echo "$out" | grep -q "^$pid y stderr" || exit 51

exit 0
t/370-logs/clean.do Normal file
@@ -0,0 +1 @@
rm -f *~ .*~ all x y pid
t/370-logs/x.do Normal file
@@ -0,0 +1,5 @@
read pid <pid
echo "$pid x stdout"
echo "$pid x stderr" >&2
rm -f y
redo y
t/370-logs/y.do Normal file
@@ -0,0 +1,3 @@
read pid <pid
echo "$pid y stdout"
echo "$pid y stderr" >&2
@@ -16,6 +16,11 @@ DEBUG_PIDS = os.environ.get('REDO_DEBUG_PIDS', '') and 1 or 0
 VERBOSE = os.environ.get('REDO_VERBOSE', '') and 1 or 0
 XTRACE = os.environ.get('REDO_XTRACE', '') and 1 or 0
 KEEP_GOING = os.environ.get('REDO_KEEP_GOING', '') and 1 or 0
+LOG = atoi(os.environ.get('REDO_LOG', '1'))  # defaults on
+LOG_INODE = os.environ.get('REDO_LOG_INODE', '')
+COLOR = atoi(os.environ.get('REDO_COLOR', '1'))  # defaults on
+# subprocesses mustn't pretty-print if a parent is running redo-log
+PRETTY = (not LOG) and atoi(os.environ.get('REDO_PRETTY', '1'))
 SHUFFLE = os.environ.get('REDO_SHUFFLE', '') and 1 or 0
 STARTDIR = os.environ.get('REDO_STARTDIR', '')
 RUNID = atoi(os.environ.get('REDO_RUNID')) or None
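The inheritance trick described in the comment above (flags default on, and a parent that exports `'0'` turns the feature off for its whole process tree) can be sketched with a hypothetical `env_flag` helper; note how `PRETTY` only turns on when log capture is off, so subprocesses emit raw `@@REDO` lines for the parent's log reader to format:

```python
import os

def env_flag(name, default='1'):
    # hypothetical helper, like redo's atoi(): unparseable values count as 0
    try:
        return int(os.environ.get(name, default))
    except ValueError:
        return 0

os.environ.pop('REDO_PRETTY', None)  # let the default apply for this demo
os.environ['REDO_LOG'] = '0'         # what a parent `redo --no-log` exports

LOG = env_flag('REDO_LOG')
# Only pretty-print directly when no parent is capturing our output.
PRETTY = (not LOG) and env_flag('REDO_PRETTY')
```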
@@ -1,9 +1,13 @@
 import sys, os

+is_toplevel = False
+
 def init_no_state():
     if not os.environ.get('REDO'):
         os.environ['REDO'] = 'NOT_DEFINED'
+        is_toplevel = True
     if not os.environ.get('REDO_BASE'):
         os.environ['REDO_BASE'] = 'NOT_DEFINED'
@@ -11,6 +15,8 @@ def init_no_state():

 def init(targets):
     if not os.environ.get('REDO'):
         # toplevel call to redo
+        global is_toplevel
+        is_toplevel = True
         if len(targets) == 0:
             targets.append('all')
         exenames = [os.path.abspath(sys.argv[0]),
@@ -1,2 +1 @@
 redo-ifchange vars _version.py