How to improve speed of a single file transfer over an unstable link
Scott C. Kennedy
sck at nogas.org
Fri Dec 29 20:03:52 GMT 2006
Long-time fan and user of rsync, with a question. First, some background...
What I'm trying to do is copy a 21GB backup file from my parents' house to
my home to help them with their new computer. But the link is sporadic, so a
continuous copy will not succeed; the session dies after anywhere from 25
minutes to 2 hours.
Thus, I've written the following script, 'get_me.sh':
# move rsync's partial temp file over the destination, then resume
mv ./.file.bkf.* ./file.bkf
rsync --timeout=90 user@remote:/dir/file.bkf ./file.bkf
./get_me.sh    # restart the cycle when the connection drops
So, the script moves the temp file created by rsync onto the file itself,
then calls rsync to continue syncing, and then after rsync loses its
connection, the script calls itself and the cycle starts again.
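The move-and-restart cycle above can also be collapsed into a single retry
loop: with rsync's --partial flag, the temp file is kept as the destination
on failure, so the mv step goes away. A minimal sketch (the transfer function
here is a stand-in that "fails" twice so the loop can be demonstrated without
the real remote host; the commented rsync line shows the actual command):

```shell
#!/bin/sh
# Retry an interrupted transfer until it completes.
attempts=0
transfer() {
    attempts=$((attempts + 1))
    # The real command would be:
    #   rsync --partial --timeout=90 user@remote:/dir/file.bkf ./file.bkf
    # Stand-in for the demo: fail on the first two attempts, then succeed.
    [ "$attempts" -ge 3 ]
}
until transfer; do
    echo "transfer interrupted (attempt $attempts); retrying" >&2
    sleep 1
done
echo "completed after $attempts attempts"
```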
Not very elegant, but it's working. Sort of. The restarts are now cutting
into overall throughput, since each new rsync run re-checks that the data is
still the same on both sides. So here's the question...
Would the "append" flag work well for this situation? I'd normally try a
few tests myself, but according to my data, it'll be at least another 4
days until the file is finished, and my parents leave in 5 days. So, I'm
a little hesitant to "experiment" on the transfer in progress.
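For what it's worth, --append resumes by transferring only the bytes past the
current length of the destination file, without re-reading what's already
there, which fits when the partial file is known to be an intact prefix of
the source (and is wrong if it could differ). The effect can be illustrated
with plain shell tools instead of rsync (filenames here are made up for the
demo):

```shell
#!/bin/sh
# Illustrates what --append does: copy only the source bytes past the
# current length of the partial destination file.
printf 'HELLO WORLD' > src.bin   # the complete remote file
printf 'HELLO'       > dst.bin   # the interrupted local copy
have=$(wc -c < dst.bin)          # bytes already transferred
tail -c +$((have + 1)) src.bin >> dst.bin   # append only the remainder
cmp -s src.bin dst.bin && echo "files match"
```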
PS> I am joining the mailing list but, please cc me on replies for the time
being.