[clug] Only one instance of this shell script at a time, please

Paul Wayper paulway at mabula.net
Wed Feb 19 02:21:47 MST 2014


Hi all,

I've just used a neat trick I learnt from the interwebs to make sure that
only one of my backup processes runs at any one time.  Otherwise, the first
rsync process starts up, and if it's still going a day later (easy with
large files, renames of entire directories while not using fuzzy matching,
bandwidth limiting to avoid unduly impinging on my remote site's download
speed, and a relatively slow internet connection) another one comes along
and starts the same process, often at the same file.

It looks like this:

-8<---------------------------------

(
flock -n -e 99 || {
	echo "Cannot get lock on $0: another process running?"
	exit 1
}

echo long running process $$ starting
sleep 10
echo long running process $$ finishing

) 99< $0

-8<---------------------------------

I generally dislike lock files because they're one more thing to clean up,
and it's easy to clobber a new lock file with a stale one if you're not
careful.  So what this does is open a subshell and (right at the end of the
file) redirect the script's own source into file descriptor 99 for that
subshell.  Then we use flock to try to get an exclusive lock (-e) on file
descriptor 99, failing immediately if that doesn't work (-n) - so waiting
processes don't queue up.  If the flock fails, we print a useful message
and exit - not in another subshell (exiting there would only leave that
subshell), but in a code block, so the exit stops the whole script.
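
To see the lock in action, here's a quick test harness (my sketch; the
file names and timings are arbitrary).  It writes the script above to a
temporary file and starts two copies at once; the second one bounces off
the lock immediately instead of doing the work:

```shell
#!/bin/sh
# Write the locking script to a temp file, then start two copies at once.
# The second copy can't take the lock, so it prints the failure message
# and exits with status 1 instead of doing the work.
tmp=$(mktemp /tmp/locktest.XXXXXX)
cat > "$tmp" <<'EOF'
(
flock -n -e 99 || {
	echo "Cannot get lock on $0: another process running?"
	exit 1
}

sleep 3
echo "work done by $$"

) 99< $0
EOF

sh "$tmp" > first.out &       # first copy takes the lock and holds it
sleep 1
sh "$tmp" > second.out        # second copy fails immediately (-n)
second_status=$?
wait                          # let the first copy finish
rm -f "$tmp"
```

The second run returns straight away with the failure message, while the
first carries on and finishes its work under the lock.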

Then you can do whatever you want in there, taking as long as you like.  The
kernel knows you've asked for exclusive access to that script (as input to
the subshell) and won't let two processes hold the lock at once.  As soon as
one finishes, the lock is released because the file descriptor is closed,
and the next night's backup will start normally.
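
As an aside, a variant of the same idea that I've seen elsewhere (a
sketch, not the script above) takes the lock at the top of the script
with exec, so nothing needs to be wrapped in a subshell at all; fd 200
is an arbitrary choice:

```shell
#!/bin/sh
# Variant: open the script itself on an arbitrary fd with exec, then
# flock that fd.  Everything after the flock runs under the lock, and
# the lock is released when the script exits and the fd is closed.
tmp=$(mktemp /tmp/lockvar.XXXXXX)
cat > "$tmp" <<'EOF'
exec 200< "$0"
flock -n -e 200 || {
	echo "Cannot get lock on $0: another process running?"
	exit 1
}
echo "long running process $$ starting"
echo "long running process $$ finishing"
EOF

sh "$tmp" > variant.out       # a single run takes the lock and works
rm -f "$tmp"
```

This keeps the script body flat, at the cost of leaving the descriptor
open for the whole run rather than scoping it to the subshell.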

Hope this helps,

