Assert that one instance never holds multiple locks on the same file at once.

This could happen if you did 'redo foo foo'.  Which nobody ever did, I
think, but let's make sure we catch it if they do.

One problem with having multiple locks on the same file is that you then have
to remember not to *unlock* it until they're all done.  But there are other
problems, such as: why the heck did we think it was a good idea to lock the
same file more than once?  So just prevent it from happening for now,
unless/until we somehow come up with a reason it might be a good idea.
Avery Pennarun 2010-12-14 02:19:08 -08:00
commit 294945bd0f
2 changed files with 11 additions and 4 deletions


@@ -270,12 +270,13 @@ def main(targets, shouldbuildfunc):
             if rv:
                 retcode[0] = 1
-    for i in range(len(targets)):
-        t = targets[i]
+    # In the first cycle, we just build as much as we can without worrying
+    # about any lock contention.  If someone else has it locked, we move on.
+    seen = {}
+    for t in targets:
+        if t in seen:
+            continue
+        seen[t] = 1
         if not jwack.has_token():
             state.commit()
         jwack.get_token(t)
@@ -298,6 +299,8 @@ def main(targets, shouldbuildfunc):
         else:
             BuildJob(t, f, lock, shouldbuildfunc, done).start()
         del lock
+    # Now we've built all the "easy" ones.  Go back and just wait on the
+    # remaining ones one by one.  There's no reason to do it any more
+    # efficiently, because if these targets were previously locked, that
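The dedup in the first hunk can be sketched on its own.  This is an
illustrative sketch, not redo's actual code (`dedup_targets` is a made-up
name): it keeps first-seen order and drops repeats, so a command line like
'redo foo foo' only locks and builds foo once.

```python
# Illustrative sketch of the seen-dict pattern from the hunk above.
# A plain dict is used as the "seen" set, matching the 2010-era
# Python 2 style of the original (no need for collections.OrderedDict
# since we only test membership).
def dedup_targets(targets):
    seen = {}
    out = []
    for t in targets:
        if t in seen:
            continue        # already handled; don't lock/build it twice
        seen[t] = 1
        out.append(t)
    return out

print(dedup_targets(['foo', 'foo', 'bar']))  # ['foo', 'bar']
```

In the real loop there is no output list; the loop body simply `continue`s
past repeats, which is enough to keep a single process from taking the same
lock twice.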


@@ -291,13 +291,17 @@ class File(object):
 # fcntl.lockf() instead.  Usually this is just a wrapper for fcntl, so it's
 # ok, but it doesn't have F_GETLK, so we can't report which pid owns the lock.
 # The makes debugging a bit harder.  When we someday port to C, we can do that.
+_locks = {}
 class Lock:
     def __init__(self, fid):
-        assert(_lockfile >= 0)
         self.owned = False
         self.fid = fid
+        assert(_lockfile >= 0)
+        assert(_locks.get(fid,0) == 0)
+        _locks[fid] = 1
 
     def __del__(self):
+        _locks[self.fid] = 0
         if self.owned:
             self.unlock()