Assert that one instance never holds multiple locks on the same file at once.
This could happen if you did 'redo foo foo'. Which nobody ever did, I think, but let's make sure we catch it if they do. One problem with holding multiple locks on the same file is that you then have to remember not to *unlock* it until they're all done. But there are other problems, such as: why the heck did we think it was a good idea to lock the same file more than once? So just prevent it from happening for now, unless/until we somehow come up with a reason it might be a good idea.
parent f5eabe61d2
commit 294945bd0f
2 changed files with 11 additions and 4 deletions
state.py (6 changes)
@@ -291,13 +291,17 @@ class File(object):
 # fcntl.lockf() instead.  Usually this is just a wrapper for fcntl, so it's
 # ok, but it doesn't have F_GETLK, so we can't report which pid owns the lock.
 # The makes debugging a bit harder.  When we someday port to C, we can do that.
+_locks = {}
 class Lock:
     def __init__(self, fid):
-        assert(_lockfile >= 0)
         self.owned = False
         self.fid = fid
+        assert(_lockfile >= 0)
+        assert(_locks.get(fid,0) == 0)
+        _locks[fid] = 1
 
     def __del__(self):
+        _locks[self.fid] = 0
         if self.owned:
             self.unlock()
 