[ Jason L Tibbitts III writes: ]
> OK, I've basically come to the conclusion that for the kind of
> locking that Majordomo2 needs to do, flock sucks. I can make it work
> flawlessly, perfectly, beautifully and without starvation, but I
> have to let it write little lockfiles all over the place and these
> lockfiles must persist essentially in perpetuity. This is really
That's essentially what I did in my alternative Mj1 shlock.pl. I used
a .LOCK subdirectory wherever I needed to lock files and just left the
lock files around: it creates a lock file the first time one is needed,
but never removes it, since removing and re-creating them all the time
was more work (and more opportunity for races).
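A sketch of that scheme, in Python rather than Perl since both wrap the
same flock(2) call. The layout (a data file "foo" getting its token at
".LOCK/foo") and the function names are invented for illustration. The
token is created on demand and flock()ed, but never unlinked --
unlinking a file another process may be about to flock() is exactly the
race that makes removal the hard part.

```python
import fcntl, os

def lock_file(datafile):
    """Take an exclusive lock on the persistent token for datafile."""
    lockdir = os.path.join(os.path.dirname(datafile) or ".", ".LOCK")
    os.makedirs(lockdir, exist_ok=True)
    token = os.path.join(lockdir, os.path.basename(datafile))
    fh = open(token, "a")              # created on first use, never removed
    fcntl.flock(fh, fcntl.LOCK_EX)     # blocks until we own it
    return fh                          # keep this open; closing drops the lock

def unlock_file(fh):
    fcntl.flock(fh, fcntl.LOCK_UN)
    fh.close()

# demo: a second open file description can't get the lock while we hold it
import tempfile
d = tempfile.mkdtemp()
fh = lock_file(os.path.join(d, "members"))
token = os.path.join(d, ".LOCK", "members")
token_exists = os.path.exists(token)
contended = False
other = open(token, "a")
try:
    fcntl.flock(other, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    contended = True
other.close()
unlock_file(fh)
```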
For those that aren't aware of the issues, the big caveat on using
Perl's flock function is that it does different things depending on how
perl was configured: anywhere from a fatal error (locking not
configured, e.g. early Win32 ports) to using one of flock(2), fcntl(2),
or lockf(3). On top of that, various systems implement some of those
functions on top of the others, e.g. lockf implemented via fcntl. Pain
results because flock and fcntl have different semantics even though
Perl unifies the syntax: flock allows locks on filehandles opened
read-only but fcntl doesn't, and flock works only on local file systems
whereas fcntl (usually) works over NFS (for some definition of "works",
depending on the NFS implementation on both the server and the client).
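The read-only difference is easy to demonstrate. Perl isn't handy to
test against here, but Python's fcntl module reaches the same syscalls;
this sketch shows flock(2) succeeding and a lockf/fcntl-style exclusive
lock being refused on the same read-only filehandle (Linux/BSD
behavior; as noted, details vary by platform):

```python
import fcntl, os, tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

fh = open(path, "r")                  # read-only filehandle
fcntl.flock(fh, fcntl.LOCK_EX)        # flock(2): exclusive lock works anyway
fcntl.flock(fh, fcntl.LOCK_UN)

lockf_refused = False
try:
    fcntl.lockf(fh, fcntl.LOCK_EX)    # lockf -> fcntl F_WRLCK: needs write access
except OSError:
    lockf_refused = True              # EBADF on a read-only descriptor

fh.close()
os.unlink(path)
```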
> I've come to the conclusion that this is really a pain in the rear for
> what should be a simple thing, and would like to look at alternatives.
At some abstract level, cooperative resource locking (really, resource
sharing) boils down to interprocess communication. While there are
a number of ways to do that, picking one that's portable across a
plethora of systems limits the choices. They are further limited if the
communicating processes exist in physically separate environments, e.g.
different machines sharing a common file system. The key element is to
use an IPC method that supports an atomic test-and-set operation, so
that there is no opportunity for ambiguity about which process holds
the lock.
Since DBMSs were mentioned, one could do a similar trick and have a Mj
lock daemon that serialized resource accesses through a network socket.
It could even look for deadlocks. But the overhead would likely be
high, and to really be useful over a network it would need some
persistent record of granted locks to recover after crashes, similar to
how file locking is implemented over NFS. By that point you have to
wonder why you aren't using built-in file locking in the first place.
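For concreteness, here is a toy version of the lock-daemon idea:
clients send "LOCK name" / "UNLOCK name" lines over TCP and the daemon
serializes access through an in-memory table. The protocol and names
are invented for illustration; everything the text says a real daemon
would need (persistence, crash recovery, deadlock detection, queueing)
is deliberately absent, which is rather the point about overhead.

```python
import socket, socketserver, threading

held = set()                    # names currently locked
table_guard = threading.Lock()  # serializes access to the table

class LockHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:
            cmd, _, name = raw.decode().strip().partition(" ")
            with table_guard:
                if cmd == "LOCK":
                    if name in held:
                        reply = "BUSY"        # no queueing in this sketch
                    else:
                        held.add(name)
                        reply = "GRANTED"
                elif cmd == "UNLOCK" and name in held:
                    held.discard(name)        # no ownership check either
                    reply = "RELEASED"
                else:
                    reply = "ERROR"
            self.wfile.write((reply + "\n").encode())

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), LockHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def ask(msg):
    with socket.create_connection(server.server_address) as s:
        s.sendall((msg + "\n").encode())
        return s.makefile().readline().strip()

r1 = ask("LOCK listA")      # GRANTED
r2 = ask("LOCK listA")      # BUSY
r3 = ask("UNLOCK listA")    # RELEASED
server.shutdown()
```

Note that a crash loses the `held` table entirely, stranding every
client that thought it held a lock -- hence the need for the persistent
record mentioned above.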
Directory creation has been claimed to be an atomic operation even over
NFS and is preferred by some in place of lock files. It does require
some sort of recovery scheme, which is significantly complicated by use
over a network.
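A sketch of the mkdir scheme, with an age-based stale-lock recovery
bolted on to show where the complication lives (names invented for
illustration). mkdir(2) is atomic, so exactly one process's mkdir
succeeds; the recovery heuristic, though, is racy and gets worse over a
network, where clock skew and slow lock holders make any timeout a
guess:

```python
import os, time

def acquire_dir_lock(path, stale_after=300):
    """Try to take the lock by creating `path`; True if we own it."""
    try:
        os.mkdir(path)              # atomic: only one process succeeds
        return True
    except FileExistsError:
        pass
    # crude recovery: steal locks older than stale_after seconds
    try:
        if time.time() - os.stat(path).st_mtime > stale_after:
            os.rmdir(path)          # racy: two sweepers may both "steal"
            os.mkdir(path)
            return True
    except OSError:
        pass                        # lost the race, or lock vanished
    return False

def release_dir_lock(path):
    os.rmdir(path)

# demo
import tempfile
lock = os.path.join(tempfile.mkdtemp(), "archive.LOCK")
a = acquire_dir_lock(lock)      # fresh: we create it
b = acquire_dir_lock(lock)      # held and not stale
release_dir_lock(lock)
c = acquire_dir_lock(lock)      # free again
```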
Maybe just locking token lock files in an out-of-the-way place isn't so
bad after all. You only need to remove the ones that have no matching
data file (and weren't created within the previous N time units?), and
then only when you feel like it, i.e. once a day, once a week, etc.
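That occasional sweep might look like this, assuming the convention
from earlier that a data file "foo" has its token at ".LOCK/foo" (the
layout and function name are invented for illustration):

```python
import os, time

def sweep_stale_locks(lockdir, datadir, min_age=86400):
    """Remove token lock files with no matching data file, skipping
    anything newer than min_age seconds (it may be mid-creation)."""
    removed = []
    now = time.time()
    for name in os.listdir(lockdir):
        lock_path = os.path.join(lockdir, name)
        if os.path.exists(os.path.join(datadir, name)):
            continue                          # data file still exists: keep
        if now - os.stat(lock_path).st_mtime < min_age:
            continue                          # too new: keep for now
        os.remove(lock_path)
        removed.append(name)
    return removed

# demo: one live list, one week-old orphaned token
import tempfile
datadir = tempfile.mkdtemp()
lockdir = os.path.join(datadir, ".LOCK")
os.mkdir(lockdir)
open(os.path.join(datadir, "list1"), "w").close()
for n in ("list1", "orphan"):
    open(os.path.join(lockdir, n), "w").close()
old = time.time() - 7 * 86400
os.utime(os.path.join(lockdir, "orphan"), (old, old))
gone = sweep_stale_locks(lockdir, datadir)    # only the orphan goes
```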