rsync big file
Matt McCutchen
hashproduct+rsync at gmail.com
Tue Jan 23 01:44:11 GMT 2007
On 1/22/07, Richard Z <rzheng at gmail.com> wrote:
> I have a cron job to rsync one directory from one server to another
> every 5 minutes. There are some really big files. So it happens when
> the second instance of rsync tries to copy the same file when the
> first instance is not done yet. It drives my CPU crazy on the source
> server. Is there a way to avoid this problem?
Make the job do nothing if a previous instance of the job is still
running. To accomplish that, use a lock file. For example, if you
have flock(1), you could use the command:
flock --exclusive --nonblock /path/to/the/lockfile rsync ARGUMENTS...
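In a crontab, that wrapper might look like the following sketch (the
schedule, lock path, and rsync arguments are placeholders, not from the
original job):

```
# Hypothetical crontab entry: attempt the sync every 5 minutes, but
# exit immediately if the previous run still holds the lock.
*/5 * * * * flock --exclusive --nonblock /var/lock/rsync-job.lock rsync ARGUMENTS...
```

With --nonblock, flock exits right away instead of queuing runs behind
one another, which is usually what you want for a periodic job.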
Or in bash, you could use this code. It has a race condition (two jobs
could both pass the test before either creates the lock file), but that
won't be a problem when the job only starts every few minutes:
if ! [ -f /path/to/the/lockfile ]; then
    touch /path/to/the/lockfile
    rsync ARGUMENTS...
    rm -f /path/to/the/lockfile
fi
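If you want to close that race without flock, one common alternative
(not from the original mail) relies on mkdir being atomic: two
concurrent jobs can't both succeed in creating the same directory. A
minimal sketch, where the lock path is a placeholder:

```shell
#!/bin/sh
# Race-free variant of the lock idea: mkdir either creates the
# directory and succeeds, or fails because it already exists, so only
# one job can "win". LOCKDIR is a hypothetical path.
LOCKDIR="${TMPDIR:-/tmp}/rsync-job.lock.d"
if mkdir "$LOCKDIR" 2>/dev/null; then
    # Drop the lock on exit, even if the transfer fails or is killed.
    trap 'rmdir "$LOCKDIR"' EXIT
    rsync ARGUMENTS...
else
    echo "previous run still active, skipping" >&2
fi
```

Unlike the touch/rm pair, there is no window between testing for the
lock and taking it.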
Matt