Switch to using a separate lockfile per target.

The previous method, using fcntl byterange locks, was very efficient and
avoided unnecessary filesystem metadata churn (i.e. creating/deleting
inodes).  Unfortunately, Mac OS X (at least version 10.6.5) apparently has a
race condition in its fcntl locking that makes it unusably unreliable
(http://apenwarr.ca/log/?m=201012#13).
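The old scheme can be sketched like this: a single shared lock file, with each target claiming its own one-byte region via fcntl. This is a minimal illustration, not redo's actual code; the path and helper names are made up.

```python
import fcntl
import os
import tempfile

# One shared lock file; each target gets its own one-byte region in it.
# (Illustrative path, not the real .redo layout.)
fd = os.open(os.path.join(tempfile.gettempdir(), 'redo-locks-demo'),
             os.O_RDWR | os.O_CREAT, 0o666)

def lock_target(target_id):
    # Block until the single byte at offset `target_id` is exclusively ours.
    fcntl.lockf(fd, fcntl.LOCK_EX, 1, target_id)

def unlock_target(target_id):
    fcntl.lockf(fd, fcntl.LOCK_UN, 1, target_id)
```

On Linux this works fine and creates no extra inodes; it's the many-ranges-per-file pattern that the Mac OS X bug makes unreliable.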

My tests indicate that if you only ever lock a *single* byterange on a file,
the race condition doesn't cause a problem.  So let's just use one lockfile
per target.  Now "redo -j20 test" passes for me on both MacOS and Linux.
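The one-lockfile-per-target scheme looks roughly like this (a sketch under the same caveats as above: hypothetical names, whole-file locking so only a single byterange per file is ever involved):

```python
import fcntl
import os
import tempfile

# One lock file per target; locking the whole file means only a single
# byterange is ever locked on any given file.
LOCKDIR = tempfile.mkdtemp()  # stands in for the .redo dir

def acquire(target):
    fd = os.open(os.path.join(LOCKDIR, target + '.lock'),
                 os.O_RDWR | os.O_CREAT, 0o666)
    fcntl.lockf(fd, fcntl.LOCK_EX)  # blocks until we own the file
    return fd

def release(fd):
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)
```

Note that release() only drops the lock; it doesn't unlink the file, which is why the lockfiles accumulate as described below.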

In my tests, this doesn't measurably affect speed, at least on Linux.

The bad news: it's hard to safely *delete* those lockfiles when we're done
with them, so they tend to accumulate in the .redo dir.
Avery Pennarun 2010-12-14 02:25:17 -08:00
commit 95680ed7ef
2 changed files with 13 additions and 10 deletions

@@ -1,4 +1,9 @@
echo n2-$1
echo $1 >>$1.count
echo $1 >>in.countall
# we deliberately use 'redo' here instead of redo-ifchange, because this *heavily*
# stresses redo's locking when building in parallel. We end up with 100
# different targets that all not only depend on this file, but absolutely must
# acquire the lock on this file, build it atomically, and release the lock.
redo countall