Raw logs contain @@REDO lines instead of formatted data.
This makes them more reliable to parse. redo-log can parse each line,
format and print it, then recurse if necessary. This got a little ugly
because I wanted 'redo --raw-logs' to work, which should format the
output nicely but not call redo-log.
(As a result, --raw-logs has a different meaning for redo and
redo-log, which is kinda dumb. I should fix that.)
As an added bonus, redo-log now handles indenting of recursive logs, so
if the build was a -> a/b -> a/b/c, and you look at the log for a/b, it
can still start at the top-level indentation.
2018-11-13 04:05:42 -05:00
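The raw-line shape described above can be illustrated with a short standalone sketch; it mirrors the format string used by meta() and the REDO_RE pattern further down in this file:

```python
import os, re, time

# Same pattern this module uses to recognize raw @@REDO lines.
REDO_RE = re.compile(r'@@REDO:([^@]+)@@ (.*)$')

# Build a raw line the way meta() formats it: kind, pid, timestamp, text.
line = '@@REDO:%s:%d:%.4f@@ %s' % ('do', os.getpid(), time.time(), 'a/b/c')

# Parse it back the way PrettyLog.write() does.
g = REDO_RE.search(line)
kind, pid, when = g.group(1).split(':')[0:3]
print(kind, g.group(2))   # -> do a/b/c
```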

import os, re, sys, time

import vars

redo-log: prioritize the "foreground" process.
When running a parallel build, redo-log -f (which is auto-started by
redo) tries to traverse the logs depth first, in the order parent
processes started subprocesses. This works pretty well, but if a
process's dependencies are locked, it might have to give up its
jobserver token while other stuff builds those dependencies. After a
dependency finishes, the parent might not be able to get a token back
for quite some time, and the logs will appear to stop.
To prevent this from happening, we can instantiate up to one "cheater"
token, only in the foreground process (the one locked by redo-log -f),
which allows it to continue running, albeit a bit slowly (since it
only has one token out of possibly many). When the process finishes,
we destroy the fake token. It gets a little complicated; see the
explanation at the top of jwack.py.
2018-11-17 04:32:09 -05:00
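The token accounting above can be sketched roughly like this. This is an illustrative model only, not the actual jwack.py implementation; the `TokenPool` class and its methods are invented for this example:

```python
# Illustrative sketch: a jobserver-style token pool where exactly one
# designated "foreground" process may mint a single extra "cheater"
# token when no real tokens are available.
class TokenPool(object):
    def __init__(self, real_tokens, is_foreground):
        self.free = real_tokens        # real tokens available right now
        self.is_foreground = is_foreground
        self.cheating = False          # at most one cheater token exists

    def acquire(self):
        if self.free > 0:
            self.free -= 1             # take a real token
            return True
        if self.is_foreground and not self.cheating:
            self.cheating = True       # mint the one allowed fake token
            return True
        return False                   # background: must wait for a real one

    def release(self):
        if self.cheating:
            self.cheating = False      # destroy the fake token first
        else:
            self.free += 1             # return a real token to the pool
```

The foreground process thus never stalls completely: it can always run at least one job, while background processes block until a real token comes back.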


def check_tty(file):
    global RED, GREEN, YELLOW, BOLD, PLAIN
    if file.isatty() and (os.environ.get('TERM') or 'dumb') != 'dumb':
        # ...use ANSI formatting codes.
        RED = "\x1b[31m"
        GREEN = "\x1b[32m"
        YELLOW = "\x1b[33m"
        BOLD = "\x1b[1m"
        PLAIN = "\x1b[m"
    else:
        RED = ""
        GREEN = ""
        YELLOW = ""
        BOLD = ""
        PLAIN = ""


class RawLog(object):
    def __init__(self, file):
        self.file = file

    def write(self, s):
        assert('\n' not in s)
        sys.stdout.flush()
        sys.stderr.flush()
        self.file.write(s + '\n')
        self.file.flush()


REDO_RE = re.compile(r'@@REDO:([^@]+)@@ (.*)$')


class PrettyLog(object):
    def __init__(self, file):
        self.file = file

    def _pretty(self, pid, color, s):
        if vars.DEBUG_PIDS:
            redo = '%d redo ' % pid
        else:
            redo = 'redo '
        self.file.write(''.join([color, redo, vars.DEPTH,
                                 BOLD if color else '', s, PLAIN, '\n']))

    def write(self, s):
        assert('\n' not in s)
        sys.stdout.flush()
        sys.stderr.flush()
        g = REDO_RE.search(s)
        if g:
            all = g.group(0)
            self.file.write(s[:-len(all)])
            words = g.group(1).split(':')
            text = g.group(2)
            kind, pid, when = words[0:3]
            pid = int(pid)
            if kind == 'unchanged':
                self._pretty(pid, '', '%s (unchanged)' % text)
            elif kind == 'check':
                self._pretty(pid, GREEN, '(%s)' % text)
            elif kind == 'do':
                self._pretty(pid, GREEN, text)
            elif kind == 'done':
                rv, name = text.split(' ', 1)
                rv = int(rv)
                if rv:
                    self._pretty(pid, RED, '%s (exit %d)' % (name, rv))
                elif vars.VERBOSE or vars.XTRACE or vars.DEBUG:
                    self._pretty(pid, GREEN, '%s (done)' % name)
                    self.file.write('\n')
            elif kind == 'locked':
                if vars.DEBUG_LOCKS:
                    self._pretty(pid, GREEN, '%s (locked...)' % text)
            elif kind == 'waiting':
                if vars.DEBUG_LOCKS:
                    self._pretty(pid, GREEN, '%s (WAITING)' % text)
            elif kind == 'unlocked':
                if vars.DEBUG_LOCKS:
                    self._pretty(pid, GREEN, '%s (...unlocked!)' % text)
            elif kind == 'error':
                self.file.write(''.join([RED, 'redo: ',
                                         BOLD, text, PLAIN, '\n']))
            elif kind == 'warning':
                self.file.write(''.join([YELLOW, 'redo: ',
                                         BOLD, text, PLAIN, '\n']))
            elif kind == 'debug':
                self._pretty(pid, '', text)
            else:
                assert 0, 'Unexpected @@REDO kind: %r' % kind
        else:
            self.file.write(s + '\n')
        self.file.flush()


_log = None


def setup(file, pretty):
    global _log
    if pretty or vars.PRETTY:
        check_tty(file)
        _log = PrettyLog(file=file)
    else:
        _log = RawLog(file=file)

# FIXME: explicitly initialize in each program
setup(file=sys.stderr, pretty=False)


def write(s):
    _log.write(s)


def meta(kind, s):
    assert(':' not in kind)
    assert('@' not in kind)
    assert('\n' not in s)
    write('@@REDO:%s:%d:%.4f@@ %s'
          % (kind, os.getpid(), time.time(), s))


def err(s):
    s = s.rstrip()
    meta('error', s)


def warn(s):
    s = s.rstrip()
    meta('warning', s)


def debug(s):
    if vars.DEBUG >= 1:
        s = s.rstrip()
        meta('debug', s)


def debug2(s):
    if vars.DEBUG >= 2:
        s = s.rstrip()
        meta('debug', s)


def debug3(s):
    if vars.DEBUG >= 3:
        s = s.rstrip()
        meta('debug', s)