Dealing with an unreliable remote

Tony Abernethy tony at servasoftware.com
Tue Nov 25 09:27:13 MST 2014


(until a better answer comes along)
The killed rsync process should leave a temporary file named
.<file-name>.<random> in the destination directory. If you rename that
temporary file to the real file name, rsync should continue from roughly
where it left off on the next run.
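
Roughly like this, for example (a sketch only; the project path under the
repository and the random suffix on the temporary file are made up here):

  # on the remote, look for the leftover hidden temporary file
  $ ssh io7m.com 'ls -la /home/io7m/mvn-repository.io7m.com/com/io7m/someproject/1.0.0/'
  # rename it to the real file name so the next run has something to build on
  $ ssh io7m.com 'cd /home/io7m/mvn-repository.io7m.com/com/io7m/someproject/1.0.0/ &&
                  mv .someproject-1.0.0.jar.AbCdEf someproject-1.0.0.jar'
  # then re-run the same rsync; the delta-transfer algorithm can reuse the
  # data already present instead of sending the whole file from scratch
  $ rsync -avz --delete --progress local/ \
      io7m.com:/home/io7m/mvn-repository.io7m.com/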

-----Original Message-----
From: rsync-bounces at lists.samba.org [mailto:rsync-bounces at lists.samba.org]
On Behalf Of net.rsync at io7m.com
Sent: Tuesday, November 25, 2014 9:03 AM
To: rsync at lists.samba.org
Subject: Dealing with an unreliable remote

'Lo.

I've run into a frustrating issue when trying to synchronize a
directory hierarchy over a reliable (but slow) connection to an 
unreliable remote. Basically, I have the following:

  http://mvn-repository.io7m.com/com/io7m/

This is a set of nested directories containing binaries and sources for
projects I develop/maintain. Every time a new release is made, I deploy
the binaries and sources to an exact copy of the above hierarchy on my
local machine, and then rsync that (over SSH) to
mvn-repository.io7m.com.

  $ rsync -avz --delete --progress local/ \
      io7m.com:/home/io7m/mvn-repository.io7m.com/

The problem:

The latest project produces one .jar file that's about 80mb.
Evidently, the hosting provider I use for io7m.com is using some sort
of process tracking system that kills processes that have been running
for too long (I think it just measures CPU time). The result of this is
that I get about 50% of the way through copying that
(comparatively) large file, and then the remote rsync process is
suddenly killed because it has been running for too long.

This would be fine, except that rsync seems to be utterly refusing all
efforts to continue copying that file from wherever it left off. It
always restarts the copy from nothing and tries to send the full 80mb
again, resulting in it being killed halfway through and causing much
grinding of teeth.

The documentation for --partial states that "Using the --partial option
tells rsync to keep the partial file which should make a subsequent
transfer of the rest of the file much faster." Well, for whatever
reason, it doesn't (or at least it fails to keep using the partial file
on the next attempt).
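
(A sketch of what I mean, i.e. the same command as above with that flag
added; as I understand it, --partial on its own keeps the partial data in
the destination directory:)

  $ rsync -avz --delete --progress --partial local/ \
      io7m.com:/home/io7m/mvn-repository.io7m.com/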

I've tried --partial-dir, passing it an absolute path to the temporary
directory in my home directory. It created a file in there the first time, 
but after being killed by the remote side and restarting, it ignored
that file and instead created a new temporary file (with a random suffix) 
in the destination directory! Am I doing something wrong?

  $ rsync -avz --delete --progress --partial-dir=/home/io7m/tmp/rsync \
      local/ io7m.com:/home/io7m/mvn-repository.io7m.com/
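
(To make this concrete, inspecting both locations between attempts looks
roughly like this; the paths are just the ones from the commands above:)

  # the directory named by --partial-dir
  $ ssh io7m.com 'ls -la /home/io7m/tmp/rsync/'
  # hidden temporary files left behind in the destination tree
  $ ssh io7m.com 'find /home/io7m/mvn-repository.io7m.com/ -name ".*.jar.*"'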

I'm at a loss. How can I reliably get this directory hierarchy up onto
the server? I don't care if I have to retry the command multiple times
until the copy has fully succeeded, but I obviously can't do that if
rsync keeps restarting the failed file from scratch every time.
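
(The retry I have in mind is nothing more sophisticated than a shell loop
like the following sketch, which only helps if the partial data actually
survives between attempts:)

  # keep re-running the same transfer until rsync exits successfully
  $ until rsync -avz --delete --progress --partial local/ \
        io7m.com:/home/io7m/mvn-repository.io7m.com/
    do
      echo "transfer interrupted, retrying in 30 seconds" >&2
      sleep 30
    done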

M


