import sys, os, errno, random, stat, signal, time
import vars, jwack, state, paths
from helpers import unlink, close_on_exec, join

Raw logs contain @@REDO lines instead of formatted data.
This makes them more reliable to parse. redo-log can parse each line,
format and print it, then recurse if necessary. This got a little ugly
because I wanted 'redo --raw-logs' to work, which should format the
output nicely but not call redo-log.
(As a result, --raw-logs means different things to redo and
redo-log, which is kinda dumb. I should fix that.)
As an added bonus, redo-log now handles indenting of recursive logs, so
if the build was a -> a/b -> a/b/c, and you look at the log for a/b, it
can still start at the top-level indentation.
2018-11-13 04:05:42 -05:00

import logs
from logs import debug, debug2, err, warn, meta, check_tty

def _nice(t):
    return state.relpath(t, vars.STARTDIR)

def _try_stat(filename):
    try:
        return os.lstat(filename)
    except OSError as e:
        if e.errno == errno.ENOENT:
            return None
        else:
            raise

redo-log: capture and linearize the output of redo builds.
redo now saves the stderr from every .do script, for every target, into
a file in the .redo directory. That means you can look up the logs
from the most recent build of any target using the new redo-log
command, for example:
redo-log -r all
The default is to show logs non-recursively, that is, it'll show when a
target does redo-ifchange on another target, but it won't recurse into
the logs for the latter target. With -r (recursive), it does. With -u
(unchanged), it does even if redo-ifchange discovered that the target
was already up-to-date; in that case, it prints the logs of the *most
recent* time the target was generated.
With --no-details, redo-log will show only the 'redo' lines, not the
other log messages. For very noisy build systems (like recursing into
a 'make' instance) this can be helpful to get an overview of what
happened, without all the cruft.
You can use the -f (follow) option like tail -f, to follow a build
that's currently in progress until it finishes. redo itself spins up a
copy of redo-log -r -f while it runs, so you can see what's going on.
Still broken in this version:
- No man page or new tests yet.
- ANSI colors don't yet work (unless you use --raw-logs, which gives
the old-style behaviour).
- You can't redirect the output of a sub-redo to a file or a
pipe right now, because redo-log is eating it.
- The regex for matching 'redo' lines in the log is very gross.
Instead, we should put the raw log files in a more machine-parseable
format, and redo-log should turn that into human-readable format.
- redo-log tries to "linearize" the logs, which makes them
comprehensible even for a large parallel build. It recursively shows
log messages for each target in depth-first tree order (by tracing
into a new target every time it sees a 'redo' line). This works
really well, but in some specific cases, the "topmost" redo instance
can get stuck waiting for a jwack token, which makes it look like the
whole build has stalled, when really redo-log is just waiting a long
time for a particular subprocess to be able to continue. We'll need to
add a specific workaround for that.
2018-11-03 22:09:18 -04:00

log_reader_pid = None


def start_stdin_log_reader(status, details, pretty, color,
                           debug_locks, debug_pids):
    if not vars.LOG: return
    global log_reader_pid
    r, w = os.pipe()    # main pipe to redo-log
    ar, aw = os.pipe()  # ack pipe from redo-log --ack-fd
    sys.stdout.flush()
    sys.stderr.flush()
    pid = os.fork()
    if pid:
        # parent
        log_reader_pid = pid
        os.close(r)
        os.close(aw)
        b = os.read(ar, 8)
        if not b:
            # subprocess died without sending us anything: that's bad.
            err('failed to start redo-log subprocess; cannot continue.\n')
            os._exit(99)
        assert b == 'REDO-OK\n'
        # now we know the subproc is running and will report our errors
        # to stderr, so it's okay to lose our own stderr.
        os.close(ar)
        os.dup2(w, 1)
        os.dup2(w, 2)
        os.close(w)
        check_tty(sys.stderr, vars.COLOR)
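The parent half above blocks on a dedicated "ack" pipe until the child proves it started, so startup failures are detected immediately rather than on the first lost write. A minimal, self-contained sketch of that handshake (names here are illustrative, not redo's):

```python
import os

def spawn_with_ack(child_main, ack_msg=b'OK\n'):
    # Ack pipe: the parent blocks until the child writes a magic string,
    # so a child that dies during startup is noticed right away.
    ar, aw = os.pipe()
    pid = os.fork()
    if pid:                      # parent
        os.close(aw)
        b = os.read(ar, 8)       # empty read => child died before acking
        os.close(ar)
        if b != ack_msg:
            raise RuntimeError('child failed to start')
        return pid
    else:                        # child
        os.close(ar)
        os.write(aw, ack_msg)    # tell the parent we're alive
        os.close(aw)
        child_main()
        os._exit(0)

pid = spawn_with_ack(lambda: None)
os.waitpid(pid, 0)
```

The single small write is below PIPE_BUF, so the parent's one read reliably sees the whole ack.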
    else:
        # child
        try:
            os.close(ar)
            os.close(w)
            os.dup2(r, 0)
            os.close(r)

redo-log: fix stdout vs stderr; don't recapture if .do script redirects stderr.
redo-log should log to stdout, because when you ask for the specific
logs from a run, the logs are the output you requested. redo-log's
stderr should be about any errors retrieving that output.
On the other hand, when you run redo, the logs are literally the stderr
of the build steps, which are incidental to the main job (building
things). So that should be sent to stderr. Previously, we were
sending to stderr when --no-log, but stdout when --log, which is
totally wrong.
Also, adding redo-log had the unexpected result that if a .do script
redirected the stderr of a sub-redo or redo-ifchange to a file or pipe,
the output would be eaten by redo-log instead of reaching the intended
destination. So a test runner like this:
    self.test:
        redo self.runtest 2>&1 | grep ERROR
would not work; the self.runtest output would be sent to redo's log
buffer (and from there, probably printed to the toplevel redo's stderr)
rather than passed along to grep.
2018-11-19 16:27:41 -05:00

            # redo-log sends to stdout (because if you ask for logs, that's
            # the output you wanted!). But redo itself sends logs to stderr
            # (because they're incidental to the thing you asked for).
            # To make these semantics work, we point redo-log's stdout at
            # our stderr when we launch it.
            os.dup2(2, 1)
            argv = [
                'redo-log',
                '--recursive', '--follow',
                '--ack-fd', str(aw),
                ('--status' if status and os.isatty(2) else '--no-status'),
                ('--details' if details else '--no-details'),
                ('--pretty' if pretty else '--no-pretty'),
redo-log: prioritize the "foreground" process.
When running a parallel build, redo-log -f (which is auto-started by
redo) tries to traverse through the logs depth first, in the order
parent processes started subprocesses. This works pretty well, but if
its dependencies are locked, a process might have to give up its
jobserver token while other stuff builds its dependencies. After the
dependency finishes, the parent might not be able to get a token for
quite some time, and the logs will appear to stop.
To prevent this from happening, we can instantiate up to one "cheater"
token, only in the foreground process (the one locked by redo-log -f),
which will allow it to continue running, albeit a bit slowly (since it
only has one token out of possibly many). When the process finishes,
we then destroy the fake token. It gets a little complicated; see
explanation at the top of jwack.py.
2018-11-17 04:32:09 -05:00
                ('--debug-locks' if debug_locks else '--no-debug-locks'),
                ('--debug-pids' if debug_pids else '--no-debug-pids'),
            ]
            if color != 1:
                argv.append('--color' if color >= 2 else '--no-color')
            argv.append('-')
            os.execvp(argv[0], argv)
        except Exception as e:
            sys.stderr.write('redo-log: exec: %s\n' % e)
        finally:
            os._exit(99)

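The key file-descriptor trick in the child is dup2(2, 1): after it, anything written to stdout lands wherever stderr currently points. A minimal standalone illustration of that step (hypothetical demo, not redo's code):

```python
import os

# Child writes to fd 1, but because of dup2(2, 1) the bytes arrive on
# whatever fd 2 points at -- here, the write end of a pipe.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                 # child
    os.close(r)
    os.dup2(w, 2)            # "stderr" is now the pipe
    os.dup2(2, 1)            # stdout goes wherever stderr goes
    os.write(1, b'via stdout\n')
    os._exit(0)
os.close(w)
data = os.read(r, 100)
os.close(r)
os.waitpid(pid, 0)
assert data == b'via stdout\n'
```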
def await_log_reader():
    if not vars.LOG: return
    global log_reader_pid
    if log_reader_pid > 0:
        # Never actually close fd#1 or fd#2; insanity awaits.
        # Replace them with something else instead.
        # Since our stdout/stderr are attached to redo-log's stdin,
        # this will notify redo-log that it's time to die (after it
        # finishes reading the logs).
        out = open('/dev/tty', 'w')
        os.dup2(out.fileno(), 1)
        os.dup2(out.fileno(), 2)
        os.waitpid(log_reader_pid, 0)
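await_log_reader relies on pipe EOF semantics: once the last copy of the pipe's write end is replaced or closed, the reader's stdin hits EOF and it can finish up and exit. A small sketch of that shutdown pattern (illustrative names, not redo's):

```python
import os

r, w = os.pipe()
pid = os.fork()
if pid == 0:                    # "log reader": consume stdin until EOF
    os.close(w)
    os.dup2(r, 0)
    os.close(r)
    while os.read(0, 4096):     # os.read returns b'' at EOF
        pass
    os._exit(0)
os.close(r)
os.write(w, b'some log output\n')
# Parent: redo re-points fd 1/2 at a tty rather than closing them; either
# way, once no write-end copies remain, the reader sees EOF. Here we just
# close our last copy.
os.close(w)
_, status = os.waitpid(pid, 0)  # reader exits once it drains the pipe
assert status == 0
```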


class ImmediateReturn(Exception):
    def __init__(self, rv):
        Exception.__init__(self, "immediate return with exit code %d" % rv)
        self.rv = rv

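ImmediateReturn turns an exit code into control flow: raise it anywhere inside a job step, catch it at the top, and convert back to a numeric status. Roughly how a caller uses it (a sketch; check_deps/run_job are hypothetical helpers, but 208 is the code the cyclic-dependency path below actually raises):

```python
class ImmediateReturn(Exception):
    def __init__(self, rv):
        Exception.__init__(self, "immediate return with exit code %d" % rv)
        self.rv = rv

def check_deps(ok):
    # Deep inside the job, bail out with a specific exit code.
    if not ok:
        raise ImmediateReturn(208)   # e.g. cyclic dependency
    return 0

def run_job(ok):
    try:
        return check_deps(ok)
    except ImmediateReturn as e:
        return e.rv                  # unwind straight to an exit code

assert run_job(True) == 0
assert run_job(False) == 208
```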
class BuildJob:
    def __init__(self, t, sf, lock, shouldbuildfunc, donefunc):
        self.t = t  # original target name, not relative to vars.BASE
        self.sf = sf
        # If the target's parent directory doesn't exist yet, replace the
        # deepest '/' with '__' until it does; the .redo*.tmp files must
        # land in a directory that actually exists.
        tmpbase = t
        while not os.path.isdir(os.path.dirname(tmpbase) or '.'):
            ofs = tmpbase.rfind('/')
            assert ofs >= 0
            tmpbase = tmpbase[:ofs] + '__' + tmpbase[ofs+1:]
        self.tmpname1 = '%s.redo1.tmp' % tmpbase
        self.tmpname2 = '%s.redo2.tmp' % tmpbase
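The tmpbase loop flattens path components until it reaches a directory that exists, so the temp files are always creatable. The same logic as a standalone helper (hypothetical name), with the directory check injected so it can be exercised without touching the filesystem:

```python
import os

def flatten_to_existing_dir(t, isdir=os.path.isdir):
    # Replace the deepest '/' with '__' until the parent directory exists.
    tmpbase = t
    while not isdir(os.path.dirname(tmpbase) or '.'):
        ofs = tmpbase.rfind('/')
        assert ofs >= 0
        tmpbase = tmpbase[:ofs] + '__' + tmpbase[ofs+1:]
    return tmpbase

# Pretend only the current directory exists:
assert flatten_to_existing_dir('a/b/c', isdir=lambda d: d == '.') == 'a__b__c'
# If a/ exists but a/b/ doesn't, only the deeper slashes are flattened:
assert flatten_to_existing_dir('a/b/c',
                               isdir=lambda d: d in ('.', 'a')) == 'a/b__c'
```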
        self.lock = lock
        self.shouldbuildfunc = shouldbuildfunc
        self.donefunc = donefunc
        self.before_t = _try_stat(self.t)

def start(self):
        assert self.lock.owned
        try:
            try:
                is_target, dirty = self.shouldbuildfunc(self.t)
            except state.CyclicDependencyError:
                err('cyclic dependency while checking %s\n' % _nice(self.t))
                raise ImmediateReturn(208)
The second half of redo-stamp: out-of-order building.
If a depends on b depends on c, and c is dirty but b uses redo-stamp
checksums, then 'redo-ifchange a' is indeterminate: we won't know if we need
to run a.do unless we first build b, but the script that *normally* runs
'redo-ifchange b' is a.do, and we don't want to run that yet, because we
don't know for sure if b is dirty, and we shouldn't build a unless one of
its dependencies is dirty. Eek!
Luckily, there's a safe solution. If we *know* a is dirty - e.g. because
a.do or one of its children has definitely changed - then we can just run
a.do immediately and there's no problem, even if b is indeterminate, because
we were going to run a.do anyhow.
If a's dependencies are *not* definitely dirty, and all we have is
indeterminate ones like b, then that means a's build process *hasn't
changed*, which means its tree of dependencies still includes b, which means
we can deduce that if we *did* run a.do, it would end up running b.do.
Since we know that anyhow, we can safely just run b.do, which will either
b.set_checked() or b.set_changed(). Once that's done, we can re-parse a's
dependencies and this time conclusively tell if it needs to be redone or
not. Even if it does, b is already up-to-date, so the 'redo-ifchange b'
line in a.do will be fast.
...now take all the above and do it recursively to handle nested
dependencies, etc, and you're done.
2010-12-11 04:40:05 -08:00
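The decision the message describes boils down to a three-way state for 'dirty': clean, definitely dirty, or a list of indeterminate (redo-stamp) dependencies. A toy sketch of how that tri-state drives the build order (hypothetical helper, not redo's API; it mirrors the `if not dirty` / `vars.NO_OOB or dirty == True` checks below):

```python
def plan(dirty, no_oob=False):
    # dirty is False (clean), True (definitely dirty), or a list of
    # "indeterminate" deps whose redo-stamp checksums must resolve first.
    if not dirty:
        return 'skip'
    if no_oob or dirty is True:
        return 'run .do now'
    return 'build deps first, then re-check'   # out-of-order building

assert plan(False) == 'skip'
assert plan(True) == 'run .do now'
assert plan(['b'], no_oob=True) == 'run .do now'
assert plan(['b']) == 'build deps first, then re-check'
```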
redo-log: add automated tests, and fix some path bugs revealed by them.
When a log for X was saying it wanted to refer to Y, we used a relative
path, but it was sometimes relative to the wrong starting location, so
redo-log couldn't find it later.
Two examples:
- if default.o.do is handling builds for a/b/x.o, and default.o.do
  does 'redo a/b/x.h', the log for x.o should refer to ./x.h, not
  a/b/x.h.
- if foo.do is handling builds for foo, and it does
  "cd a/b && redo x", the log for foo should refer to a/b/x, not just
  x.
2018-11-19 17:09:40 -05:00

            if not dirty:
                # target doesn't need to be built; skip the whole task
                if is_target:
                    meta('unchanged', state.target_relpath(self.t))
                return self._after2(0)
        except ImmediateReturn as e:
            return self._after2(e.rv)

        # The second half of redo-stamp: out-of-order building (2010-12-11).
        # If a depends on b depends on c, and c is dirty but b uses
        # redo-stamp checksums, then 'redo-ifchange a' is indeterminate: we
        # won't know if we need to run a.do unless we first build b, but the
        # script that *normally* runs 'redo-ifchange b' is a.do, and we
        # don't want to run that yet, because we don't know for sure if b is
        # dirty, and we shouldn't build a unless one of its dependencies is
        # dirty.  Eek!
        #
        # Luckily, there's a safe solution.  If we *know* a is dirty - eg.
        # because a.do or one of its children has definitely changed - then
        # we can just run a.do immediately and there's no problem, even if b
        # is indeterminate, because we were going to run a.do anyhow.
        #
        # If a's dependencies are *not* definitely dirty, and all we have is
        # indeterminate ones like b, then that means a's build process
        # *hasn't changed*, which means its tree of dependencies still
        # includes b, which means we can deduce that if we *did* run a.do,
        # it would end up running b.do.  Since we know that anyhow, we can
        # safely just run b.do, which will either b.set_checked() or
        # b.set_changed().  Once that's done, we can re-parse a's
        # dependencies and this time conclusively tell if it needs to be
        # redone or not.  Even if it does, b is already up-to-date, so the
        # 'redo-ifchange b' line in a.do will be fast.
        #
        # ...now take all the above and do it recursively to handle nested
        # dependencies, etc, and you're done.
        if vars.NO_OOB or dirty == True:
            self._start_do()
        else:
            self._start_unlocked(dirty)
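The branch above is the crux of out-of-order building: a definitely-dirty target (or one built with out-of-band checking disabled via NO_OOB) runs its .do file immediately, while a target whose children are merely *possibly* dirty gets an out-of-band check of those children first. A minimal standalone sketch of that dispatch; `choose_action` and its return values are hypothetical stand-ins for the real builder calls:

```python
def choose_action(no_oob, dirty):
    """Mirror the dispatch above: 'dirty' is either True (definitely
    dirty) or a list of indeterminate dependencies to check first."""
    if no_oob or dirty == True:
        return ('run-do-file', None)
    else:
        # not sure yet: verify the checksummed children out of band
        return ('check-unlocked', list(dirty))

# definitely dirty: build immediately
assert choose_action(False, True) == ('run-do-file', None)
# indeterminate children: check them first
assert choose_action(False, ['b', 'c']) == ('check-unlocked', ['b', 'c'])
# out-of-band building disabled: always build
assert choose_action(True, ['b']) == ('run-do-file', None)
```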

    def _start_do(self):
        assert(self.lock.owned)
        t = self.t
        sf = self.sf
        newstamp = sf.read_stamp()
        if (sf.is_generated and
                newstamp != state.STAMP_MISSING and
                (sf.stamp != newstamp or sf.is_override)):
            state.warn_override(_nice(t))
            if not sf.is_override:
                warn('%s - old: %r\n' % (_nice(t), sf.stamp))
                warn('%s - new: %r\n' % (_nice(t), newstamp))
                sf.set_override()
            sf.set_checked()
            sf.save()
            return self._after2(0)
        if (os.path.exists(t) and not os.path.isdir(t + '/.')
                and not sf.is_generated):
            # an existing source file that was not generated by us.
            # This step is mentioned by djb in his notes.
            # For example, a rule called default.c.do could be used to try
            # to produce hello.c, but we don't want that to happen if
            # hello.c was created by the end user.
            debug2("-- static (%r)\n" % t)
            sf.set_static()
            sf.save()
            return self._after2(0)
        sf.zap_deps1()
        (dodir, dofile, basedir, basename, ext) = paths.find_do_file(sf)
        if not dofile:
            if os.path.exists(t):
                sf.set_static()
                sf.save()
                return self._after2(0)
            else:
                err('no rule to redo %r\n' % t)
                return self._after2(1)
        unlink(self.tmpname1)
        unlink(self.tmpname2)
        ffd = os.open(self.tmpname1, os.O_CREAT|os.O_RDWR|os.O_EXCL, 0666)
        close_on_exec(ffd, True)
        self.f = os.fdopen(ffd, 'w+')
        # this will run in the dofile's directory, so use only basenames here
        arg1 = basename + ext  # target name (including extension)
        arg2 = basename        # target name (without extension)
        argv = ['sh', '-e',
                dofile,
                arg1,
                arg2,
                # temp output file name
                state.relpath(os.path.abspath(self.tmpname2), dodir),
                ]
        if vars.VERBOSE: argv[1] += 'v'
        if vars.XTRACE: argv[1] += 'x'
        firstline = open(os.path.join(dodir, dofile)).readline().strip()
        if firstline.startswith('#!/'):
            argv[0:2] = firstline[2:].split(' ')
        # make sure to create the logfile *before* writing the log about it.
        # that way redo-log won't trace into an obsolete logfile.
        if vars.LOG: open(state.logname(self.sf.id), 'w')
        meta('do', state.target_relpath(t))
        self.dodir = dodir
        self.basename = basename
        self.ext = ext
        self.argv = argv
        sf.is_generated = True
        sf.save()
        dof = state.File(name=os.path.join(dodir, dofile))
        dof.set_static()
        dof.save()
        state.commit()
        jwack.start_job(t, self._do_subproc, self._after)
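`_start_do` assembles the command line handed to the jobserver: `sh -e` (with `v`/`x` appended for verbose/xtrace), the .do file, `$1` (the full target name), `$2` (the name without extension), and `$3` (the temp output file); a `#!/` first line in the .do script replaces the default shell. A runnable sketch of that construction; the function name and its defaults are illustrative, not part of redo:

```python
def build_argv(dofile, basename, ext, tmpname,
               firstline='', verbose=False, xtrace=False):
    """Sketch of the argv construction above: sh -e[vx] dofile $1 $2 $3,
    with an optional shebang line overriding the shell."""
    argv = ['sh', '-e', dofile, basename + ext, basename, tmpname]
    if verbose: argv[1] += 'v'
    if xtrace: argv[1] += 'x'
    if firstline.startswith('#!/'):
        # replace ['sh', '-e...'] with the interpreter named by the shebang
        argv[0:2] = firstline[2:].split(' ')
    return argv

assert build_argv('default.o.do', 'x', '.o', 'x.o.redo.tmp') == \
    ['sh', '-e', 'default.o.do', 'x.o', 'x', 'x.o.redo.tmp']
assert build_argv('t.do', 't', '', 'tmp', verbose=True)[1] == '-ev'
assert build_argv('t.do', 't', '', 'tmp',
                  firstline='#!/usr/bin/env python')[0:2] == \
    ['/usr/bin/env', 'python']
```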

    def _start_unlocked(self, dirty):
        # out-of-band redo of some sub-objects.  This happens when we're not
        # quite sure if t needs to be built or not (because some children
        # look dirty, but might turn out to be clean thanks to checksums).
        # We have to call redo-unlocked to figure it all out.
        #
        # Note: redo-unlocked will handle all the updating of sf, so we
        # don't have to do it here, nor call _after1.  However, we have to
        # hold onto the lock because otherwise we would introduce a race
        # condition; that's why it's called redo-unlocked, because it doesn't
        # grab a lock.
        here = os.getcwd()
        def _fix(p):
            return state.relpath(os.path.join(vars.BASE, p), here)
        argv = (['redo-unlocked', _fix(self.sf.name)] +
                [_fix(d.name) for d in dirty])
        meta('check', state.target_relpath(self.t))
        state.commit()
        def run():
            os.environ['REDO_DEPTH'] = vars.DEPTH + ' '
            signal.signal(signal.SIGPIPE, signal.SIG_DFL)  # python ignores SIGPIPE
            os.execvp(argv[0], argv)
            assert(0)
            # returns only if there's an exception
        def after(t, rv):
            return self._after2(rv)
        jwack.start_job(self.t, run, after)
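`_fix` above rebases database-relative dependency names onto the current directory before handing them to `redo-unlocked`, so the child resolves the same files no matter where the build script left the cwd. A sketch of the same rebasing using `os.path.relpath` (an assumption: the real `state.relpath` also normalizes paths through the redo base directory):

```python
import os.path

def fix(base, name, here):
    # make a base-relative dependency name usable from 'here'
    return os.path.relpath(os.path.join(base, name), here)

# a dependency in the current directory becomes a bare name
assert fix('/proj', 'a/b/x.h', '/proj/a/b') == 'x.h'
# one above us grows the right number of '..' components
assert fix('/proj', 'x', '/proj/a/b') == os.path.join('..', '..', 'x')
```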

    def _do_subproc(self):
        # careful: REDO_PWD was the PWD relative to the STARTPATH at the time
        # we *started* building the current target; but that target ran
        # redo-ifchange, and it might have done it from a different directory
        # than we started it in.  So os.getcwd() might be != REDO_PWD right
        # now.
        assert(state.is_flushed())
        dn = self.dodir
        newp = os.path.realpath(dn)
        os.environ['REDO_PWD'] = state.relpath(newp, vars.STARTDIR)
        os.environ['REDO_TARGET'] = self.basename + self.ext
        os.environ['REDO_DEPTH'] = vars.DEPTH + ' '
        vars.add_lock(str(self.lock.fid))
        if dn:
            os.chdir(dn)
        os.dup2(self.f.fileno(), 1)
        os.close(self.f.fileno())
        close_on_exec(1, False)
        if vars.LOG:
            cur_inode = str(os.fstat(2).st_ino)
            if not vars.LOG_INODE or cur_inode == vars.LOG_INODE:
                # .do script has *not* redirected stderr, which means we're
                # using redo-log's log saving mode.  That means subprocs
                # should be logged to their own file.  If the .do script
                # *does* redirect stderr, that redirection should be inherited
                # by subprocs, so we'd do nothing.
                logf = open(state.logname(self.sf.id), 'w')
                new_inode = str(os.fstat(logf.fileno()).st_ino)
                os.environ['REDO_LOG'] = '1'  # .do files can check this
                os.environ['REDO_LOG_INODE'] = new_inode
                os.dup2(logf.fileno(), 2)
                close_on_exec(2, False)
                logf.close()
        else:
            if 'REDO_LOG_INODE' in os.environ:
                del os.environ['REDO_LOG_INODE']
            os.environ['REDO_LOG'] = ''
        signal.signal(signal.SIGPIPE, signal.SIG_DFL)  # python ignores SIGPIPE
        if vars.VERBOSE or vars.XTRACE:
            logs.write('* %s' % ' '.join(self.argv).replace('\n', ' '))
        os.execvp(self.argv[0], self.argv)
        # FIXME: it would be nice to log the exit code to logf.
        # But that would have to happen in the parent process, which doesn't
        # have logf open.
        assert(0)
        # returns only if there's an exception
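The `vars.LOG` branch decides whether to recapture stderr by comparing the inode currently behind fd 2 with the one recorded in `REDO_LOG_INODE`: if they still match, the .do script did not redirect stderr, so the subprocess gets its own log file; if they differ, the script's redirection is left intact. A self-contained sketch of that inode test, using temp files in place of fd 2:

```python
import os, tempfile

def should_recapture(stderr_fd, expected_inode):
    """True if the fd still points at the inode redo-log set up, i.e. the
    .do script did not redirect stderr in between."""
    cur = str(os.fstat(stderr_fd).st_ino)
    return not expected_inode or cur == expected_inode

f1 = tempfile.TemporaryFile()
f2 = tempfile.TemporaryFile()
ino1 = str(os.fstat(f1.fileno()).st_ino)
assert should_recapture(f1.fileno(), ino1)      # same file: recapture
assert not should_recapture(f2.fileno(), ino1)  # redirected: leave it alone
assert should_recapture(f2.fileno(), '')        # no inode recorded yet
```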

    def _after(self, t, rv):
        try:
            state.check_sane()
            rv = self._after1(t, rv)
            state.commit()
        finally:
            self._after2(rv)

    def _after1(self, t, rv):
        f = self.f
        before_t = self.before_t
        after_t = _try_stat(t)
        st1 = os.fstat(f.fileno())
        st2 = _try_stat(self.tmpname2)
        if (after_t and
                (not before_t or before_t.st_mtime != after_t.st_mtime) and
                not stat.S_ISDIR(after_t.st_mode)):
            err('%s modified %s directly!\n' % (self.argv[2], t))
            err('...you should update $3 (a temp file) or stdout, not $1.\n')
            rv = 206
        elif st2 and st1.st_size > 0:
            err('%s wrote to stdout *and* created $3.\n' % self.argv[2])
            err('...you should write status messages to stderr, not stdout.\n')
            rv = 207
        if rv==0:
            if st2:
                try:
                    os.rename(self.tmpname2, t)
                except OSError, e:
                    dnt = os.path.dirname(t)
                    if not os.path.exists(dnt):
                        err('%s: target dir %r does not exist!\n' % (t, dnt))
                    else:
                        err('%s: rename %s: %s\n' % (t, self.tmpname2, e))
                    raise
                os.unlink(self.tmpname1)
            elif st1.st_size > 0:
                try:
                    os.rename(self.tmpname1, t)
                except OSError, e:
                    if e.errno == errno.ENOENT:
                        unlink(t)
                    else:
                        err('%s: can\'t save stdout to %r: %s\n' %
                            (self.argv[2], t, e.strerror))
                        rv = 1000
                if st2:
                    os.unlink(self.tmpname2)
            else:  # no output generated at all; that's ok
                unlink(self.tmpname1)
                unlink(t)
            sf = self.sf
            sf.refresh()
            sf.is_generated = True
            sf.is_override = False
            if sf.is_checked() or sf.is_changed():
                # it got checked during the run; someone ran redo-stamp.
                # update_stamp would call set_changed(); we don't want that
                sf.stamp = sf.read_stamp()
            else:
                sf.csum = None
                sf.update_stamp()
                sf.set_changed()
        else:
            unlink(self.tmpname1)
            unlink(self.tmpname2)
            sf = self.sf
            sf.set_failed()
            sf.zap_deps2()
            sf.save()
        f.close()
        meta('done', '%d %s' % (rv, state.target_relpath(self.t)))
        return rv
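`_after1` enforces the output contract: a .do script may produce output by writing `$3` or stdout but not both (exit 207), must never modify `$1` directly (exit 206), and whichever temp file it did write is renamed over the target. A sketch of that arbitration with the stat results reduced to booleans; the function name and action strings are illustrative only:

```python
def arbitrate(wrote_stdout, wrote_tmp3, touched_target):
    """Return (exit_code, action) mirroring the checks above."""
    if touched_target:
        return (206, 'error: modified $1 directly')
    if wrote_tmp3 and wrote_stdout:
        return (207, 'error: wrote stdout *and* created $3')
    if wrote_tmp3:
        return (0, 'rename $3 over target')
    if wrote_stdout:
        return (0, 'rename stdout tmpfile over target')
    return (0, 'no output; remove target')

assert arbitrate(False, True, False) == (0, 'rename $3 over target')
assert arbitrate(True, False, False) == (0, 'rename stdout tmpfile over target')
assert arbitrate(True, True, False)[0] == 207
assert arbitrate(False, False, True)[0] == 206
```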

    def _after2(self, rv):
        try:
            self.donefunc(self.t, rv)
            assert(self.lock.owned)
        finally:
            self.lock.unlock()


def main(targets, shouldbuildfunc):
    retcode = [0]  # a list so that it can be reassigned from done()
    if vars.SHUFFLE:
        import random
        random.shuffle(targets)

    locked = []

    def done(t, rv):
        if rv:
            retcode[0] = 1
|
redo-log: prioritize the "foreground" process.
When running a parallel build, redo-log -f (which is auto-started by
redo) tries to traverse through the logs depth first, in the order
parent processes started subprocesses. This works pretty well, but if
its dependencies are locked, a process might have to give up its
jobserver token while other stuff builds its dependencies. After the
dependency finishes, the parent might not be able to get a token for
quite some time, and the logs will appear to stop.
To prevent this from happening, we can instantiate up to one "cheater"
token, only in the foreground process (the one locked by redo-log -f),
which will allow it to continue running, albeit a bit slowly (since it
only has one token out of possibly many). When the process finishes,
we then destroy the fake token. It gets a little complicated; see
explanation at the top of jwack.py.
2018-11-17 04:32:09 -05:00
    if vars.TARGET and not vars.UNLOCKED:
        me = os.path.join(vars.STARTDIR,
                          os.path.join(vars.PWD, vars.TARGET))
        myfile = state.File(name=me)
        selflock = state.Lock(state.LOG_LOCK_MAGIC + myfile.id)
    else:
        selflock = myfile = me = None

    def cheat():
        if not selflock:
            return 0
        selflock.trylock()
        if not selflock.owned:
            # redo-log already owns it: let's cheat.
            # Give ourselves one extra token so that the "foreground" log
            # can always make progress.
            return 1
        else:
            # redo-log isn't watching us (yet)
            selflock.unlock()
            return 0
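The commit message above explains why cheat() exists; stripped of redo's lock machinery, the one-extra-token policy it implements looks roughly like this (a toy sketch; `TokenPool` and its method names are hypothetical, not jwack's real interface):

```python
# Toy model of the "cheater token" idea: workers draw from a fixed pool of
# real jobserver tokens, and only the foreground process may grant itself a
# single extra token, which is destroyed again when released.
class TokenPool:
    def __init__(self, real_tokens):
        self.free = real_tokens
        self.cheated = False  # at most one fake token exists at a time

    def ensure_token_or_cheat(self, cheat):
        if self.free > 0:          # prefer a real jobserver token
            self.free -= 1
            return 'real'
        if not self.cheated and cheat():
            self.cheated = True    # fake token; the real pool is untouched
            return 'cheat'
        return None                # caller must block until a token frees up

    def release(self, kind):
        if kind == 'real':
            self.free += 1
        elif kind == 'cheat':
            self.cheated = False   # destroy the fake token

pool = TokenPool(real_tokens=1)
a = pool.ensure_token_or_cheat(lambda: False)  # background job: real token
b = pool.ensure_token_or_cheat(lambda: True)   # foreground: pool empty, cheats
c = pool.ensure_token_or_cheat(lambda: True)   # second cheat is refused
```

Because the fake token never touches the real pool, total concurrency can exceed the configured job count by at most one, and only while the foreground process runs.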
    # In the first cycle, we just build as much as we can without worrying
    # about any lock contention. If someone else has it locked, we move on.
    seen = {}
    lock = None
    for t in targets:
        if not t:
            err('cannot build the empty target ("").\n')
            retcode[0] = 204
            break
        assert(state.is_flushed())
        if t in seen:
            continue
        seen[t] = 1
        if not jwack.has_token():
            state.commit()
        jwack.ensure_token_or_cheat(t, cheat)
        if retcode[0] and not vars.KEEP_GOING:
            break
        if not state.check_sane():
            err('.redo directory disappeared; cannot continue.\n')
            retcode[0] = 205
            break
        f = state.File(name=t)
        lock = state.Lock(f.id)
The second half of redo-stamp: out-of-order building.
If a depends on b depends on c, and c is dirty but b uses redo-stamp
checksums, then 'redo-ifchange a' is indeterminate: we won't know if we need
to run a.do unless we first build b, but the script that *normally* runs
'redo-ifchange b' is a.do, and we don't want to run that yet, because we
don't know for sure if b is dirty, and we shouldn't build a unless one of
its dependencies is dirty. Eek!
Luckily, there's a safe solution. If we *know* a is dirty - eg. because
a.do or one of its children has definitely changed - then we can just run
a.do immediately and there's no problem, even if b is indeterminate, because
we were going to run a.do anyhow.
If a's dependencies are *not* definitely dirty, and all we have is
indeterminate ones like b, then that means a's build process *hasn't
changed*, which means its tree of dependencies still includes b, which means
we can deduce that if we *did* run a.do, it would end up running b.do.
Since we know that anyhow, we can safely just run b.do, which will either
b.set_checked() or b.set_changed(). Once that's done, we can re-parse a's
dependencies and this time conclusively tell if it needs to be redone or
not. Even if it does, b is already up-to-date, so the 'redo-ifchange b'
line in a.do will be fast.
...now take all the above and do it recursively to handle nested
dependencies, etc, and you're done.
2010-12-11 04:40:05 -08:00
        if vars.UNLOCKED:
            lock.owned = True
        else:
            lock.trylock()
        if not lock.owned:
            meta('locked', state.target_relpath(t))
            locked.append((f.id, t, f.name))
        else:
            # We had to create f before we had a lock, because we need f.id
            # to make the lock. But someone may have updated the state
            # between then and now.
            # FIXME: separate obtaining the fid from creating the File.
            # FIXME: maybe integrate locking into the File object?
            f.refresh()
            BuildJob(t, f, lock, shouldbuildfunc, done).start()
        state.commit()
        assert(state.is_flushed())
        lock = None
    del lock
    # Now we've built all the "easy" ones. Go back and just wait on the
    # remaining ones one by one. There's no reason to do it any more
    # efficiently, because if these targets were previously locked, that
    # means someone else was building them; thus, we probably won't need to
    # do anything. The only exception is if we're invoked as redo instead
    # of redo-ifchange; then we have to redo it even if someone else already
    # did. But that should be rare.
    while locked or jwack.running():
        state.commit()
        jwack.wait_all()
        assert jwack._mytokens == 0
        jwack.ensure_token_or_cheat('self', cheat)
        # at this point, we don't have any children holding any tokens, so
        # it's okay to block below.
        if retcode[0] and not vars.KEEP_GOING:
            break
        if locked:
            if not state.check_sane():
                err('.redo directory disappeared; cannot continue.\n')
                retcode[0] = 205
                break
            fid, t, fname = locked.pop(0)
            lock = state.Lock(fid)
            backoff = 0.01
            lock.trylock()
            while not lock.owned:
                # Don't spin with 100% CPU while we fight for the lock.
                import random
                time.sleep(random.random() * min(backoff, 1.0))
                backoff *= 2
                # after printing this line, redo-log will recurse into t,
                # whether it's us building it, or someone else.
                meta('waiting', state.target_relpath(t))
                try:
Cyclic dependency checker: don't give up token in common case.
The way the code was written, we'd give up our token, detect a cyclic
dependency, and then try to get our token back before exiting. Even
with -j1, the temporary token release allowed any parent up the tree to
continue running jobs, so it would take an arbitrary amount of time
before we could exit (and report an error code to the parent).
There was no visible symptom of this except that, with -j1, t/355-deps-cyclic
would not finish until some of the later tests finished, which was
surprising.
To fix it, let's just check for a cyclic dependency first, then release
the token only once we're sure things are sane.
2018-11-13 06:54:31 -05:00
                    lock.check()
                except state.CyclicDependencyError:
                    err('cyclic dependency while building %s\n' % _nice(t))
                    retcode[0] = 208
                    return retcode[0]
                # this sequence looks a little silly, but the idea is to
                # give up our personal token while we wait for the lock to
                # be released; but we should never run ensure_token() while
                # holding a lock, or we could cause deadlocks.
                jwack.release_mine()
                lock.waitlock()
                # now t is definitely free, so we get to decide whether
                # to build it.
                lock.unlock()
                jwack.ensure_token_or_cheat(t, cheat)
                lock.trylock()
            assert(lock.owned)
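The sleep in the retry loop above is exponential backoff with jitter: each failed attempt doubles the delay, but the actual sleep is a random fraction of it (capped at one second), so contending processes don't wake in lockstep. The pattern in isolation (a sketch; `wait_for` is a hypothetical helper, not part of redo):

```python
import random
import time

def wait_for(try_acquire, cap=1.0, initial=0.01):
    """Poll try_acquire() until it succeeds, sleeping a random,
    exponentially growing (but capped) interval between attempts."""
    backoff = initial
    attempts = 0
    while not try_acquire():
        time.sleep(random.random() * min(backoff, cap))  # jittered sleep
        backoff *= 2                                     # exponential growth
        attempts += 1
    return attempts

# A fake lock that becomes free on the third poll:
state = {'polls_left': 3}
def try_acquire():
    state['polls_left'] -= 1
    return state['polls_left'] <= 0

print(wait_for(try_acquire, cap=0.001))  # -> 2 (two failed attempts)
```

The jitter matters more than the exact growth rate: without it, two processes that lost the same lock race would sleep identical intervals and collide again on every retry.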
            meta('unlocked', state.target_relpath(t))
            if state.File(name=t).is_failed():
                err('%s: failed in another thread\n' % _nice(t))
                retcode[0] = 2
                lock.unlock()
            else:
                BuildJob(t, state.File(id=fid), lock,
                         shouldbuildfunc, done).start()
            lock = None
        state.commit()
    return retcode[0]
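main() above is a two-pass loop: pass one opportunistically trylock()s every target and defers the contended ones; pass two blocks on each deferred target in turn. The shape of that algorithm, reduced to ordinary threading locks (a sketch under assumed names; real redo also decides per target whether a build is still needed once the lock is free):

```python
import threading
import time

def build_all(targets, locks, build):
    deferred = []
    for t in targets:                           # pass 1: non-blocking
        if locks[t].acquire(blocking=False):    # like lock.trylock()
            try:
                build(t)
            finally:
                locks[t].release()
        else:
            deferred.append(t)                  # someone else is building it
    for t in deferred:                          # pass 2: wait one by one
        locks[t].acquire()                      # like lock.waitlock()
        try:
            build(t)                            # redo would often skip this
        finally:
            locks[t].release()

built = []
locks = {t: threading.Lock() for t in 'abc'}
locks['b'].acquire()  # simulate another process currently building b
releaser = threading.Thread(
    target=lambda: (time.sleep(0.05), locks['b'].release()))
releaser.start()
build_all('abc', locks, built.append)
releaser.join()
print(built)  # the contended target 'b' finishes last
```

Deferring instead of blocking in pass one is what keeps a parallel build saturated: the process never idles on a lock while there are still unlocked targets it could be building.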