How can I ensure that only one instance of a script is running at a time (mutual exclusion, locking)?
We need some means of mutual exclusion. One way is to use a "lock": any number of processes can try to acquire the lock simultaneously, but only one of them will succeed.
How can we implement this using shell scripts? Some people suggest creating a lock file, and checking for its presence:
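A sketch of the check-then-create pattern being described (the path and messages are illustrative assumptions, matching the style of the later examples):

```shell
# locking example -- WRONG
# Bourne
lockfile=/tmp/myscript.lock
if [ -f "$lockfile" ]
then    # lock already held by someone else
    printf >&2 'cannot acquire lock, giving up on %s\n' "$lockfile"
    exit 0
else    # RACE WINDOW: another process may create the file
    touch "$lockfile"    # between the test above and this touch
    # continue script
fi
```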
This example does not work, because there is a RaceCondition: a time window between checking and creating the file, during which other programs may act. Assume two processes are running this code at the same time. Both check if the lockfile exists, and both get the result that it does not exist. Now both processes assume they have acquired the lock -- a disaster waiting to happen. We need an atomic check-and-create operation, and fortunately there is one: mkdir, the command to create a directory:
```shell
# locking example -- CORRECT
# Bourne
lockdir=/tmp/myscript.lock
if mkdir "$lockdir"
then    # directory did not exist, but was created successfully
    printf >&2 'successfully acquired lock: %s\n' "$lockdir"
    # continue script
else
    printf >&2 'cannot acquire lock, giving up on %s\n' "$lockdir"
    exit 0
fi
```
Here, even when two processes call mkdir at the same time, at most one of them can succeed. This atomicity of the check-and-create operation is guaranteed at the operating system kernel level.
Instead of using mkdir we could also have used ln -s, which creates a symbolic link atomically. A third possibility is to acquire the lock by deleting a preexisting lock file with rm (only one process's rm will succeed); the lock is then released by recreating the file on exit.
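A minimal sketch of the ln -s variant, reusing the lock path from the examples above (the "pid=$$" link target is an illustrative choice; what matters is that ln -s fails if the link name already exists):

```shell
# symlink as a lock -- creating a symbolic link is atomic,
# and ln -s fails if the link name already exists
lockfile=/tmp/myscript.lock
if ln -s "pid=$$" "$lockfile" 2>/dev/null
then
    printf >&2 'successfully acquired lock: %s\n' "$lockfile"
    trap 'rm -f "$lockfile"' 0    # release the lock on exit
    # continue script
else
    printf >&2 'cannot acquire lock, giving up on %s\n' "$lockfile"
    exit 0
fi
```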
Note that we cannot use mkdir -p to automatically create missing path components: mkdir -p does not return an error if the directory already exists, yet that error is exactly the feature we rely upon to ensure mutual exclusion.
Now let's spice up this example by automatically removing the lock when the script finishes:
```shell
# POSIX (maybe Bourne?)
lockdir=/tmp/myscript.lock
if mkdir "$lockdir"
then
    printf >&2 'successfully acquired lock\n'

    # Remove lockdir when the script finishes, or when it receives a signal
    trap 'rm -rf "$lockdir"' 0    # remove directory when script finishes

    # Optionally create temporary files in this directory, because
    # they will be removed automatically:
    tmpfile=$lockdir/filelist

else
    printf >&2 'cannot acquire lock, giving up on %s\n' "$lockdir"
    exit 0
fi
```
This example is much better. There is still the problem that a stale lock can remain if the script is terminated by a signal it does not catch (or by SIGKILL, signal 9), or that the lock directory can be created by a user (either accidentally or maliciously), but it's a good step towards reliable mutual exclusion. Charles Duffy has contributed an example that may remedy the "stale lock" problem.
If you're using a GNU/Linux distribution, you can also get the benefit of using flock(1). flock(1) ties a FileDescriptor to a lock file. There are multiple ways to use it; one possibility to solve the multiple instance problem is:
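A sketch of one such usage, holding an exclusive lock via file descriptor 9 as in the discussion further down (the lock file path is an assumption):

```shell
#!/bin/bash
# Open (or create) the lock file on file descriptor 9, then try to
# take an exclusive lock on it without blocking.
exec 9>/tmp/myscript.lockfile
if ! flock -n 9
then
    printf >&2 'another instance is running, giving up\n'
    exit 1
fi
# continue script -- the lock is released when FD 9 is closed,
# which happens automatically when the script exits
```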
flock can also be used to protect only a part of your script, see the man page for more information.
I believe using if (set -C; : >$lockfile); then ... is equally safe if not safer. The Bash source uses open(filename, flags|O_EXCL, mode); which should be atomic on almost all platforms (with the exception of some versions of NFS, where mkdir may not be atomic either). I haven't traced the path of the flags variable, which must contain O_CREAT, nor have I looked at any other shells. I wouldn't suggest using this until someone else can back up my claims. --Andy753421
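A fuller sketch of the noclobber approach described in the comment above (the path and messages are assumptions, matching the earlier examples):

```shell
# noclobber (set -C) makes '>' fail if the target file already exists;
# in bash this maps to an open() with O_EXCL, so the create is atomic
lockfile=/tmp/myscript.lock
if (set -C; : > "$lockfile") 2>/dev/null
then
    printf >&2 'successfully acquired lock: %s\n' "$lockfile"
    trap 'rm -f "$lockfile"' 0    # release the lock on exit
    # continue script
else
    printf >&2 'cannot acquire lock, giving up on %s\n' "$lockfile"
    exit 0
fi
```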
- Using set -C does not work with ksh88. Ksh88 does not use O_EXCL when you set noclobber (-C). --jrw32982
Are you sure mkdir has problems with being atomic on NFS? I thought that affected only open, but I'm not really sure. -- BeJonas 2008-07-24 01:22:59
Removal of locking mechanism
Shouldn't the example code blocks above include a rm "$lockfile" or rmdir "$lockdir" directly after the # continue script line? - AnthonyGeoghegan
The lock can't be safely removed while the script is still doing its work -- that would allow another instance to run. The longer example includes a trap that removes the lock when the script exits.
flock file descriptor uniqueness
The example uses file descriptor 9 with flock, i.e.
- if ! flock -n 9...
Note that file descriptors are per-process. FDs 0, 1, and 2 are used for stdin, stdout, and stderr, so picking a reasonably high value is generally sufficient. (source: http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.genprogc/doc/genprogc/fdescript.htm )
However, what if this file descriptor is already in use by a completely different process? Are we then locking on the file descriptor and not the lock file? How can we ensure we use something that is not already being used?
For more discussion on these issues, see ProcessManagement.