How to improve speed of a single file transfer over an unstable link?
Paul Slootman
paul at debian.org
Fri Dec 29 20:37:19 GMT 2006
On Fri 29 Dec 2006, Scott C. Kennedy wrote:
>
> Thus, I've written the following script 'get_me.sh':
>
> #!/bin/sh
> # Promote rsync's hidden temp file so the next run resumes from it.
> mv .file.bkf.* ./file.bkf
> rsync --timeout 90 user@remote:/dir/file.bkf ./file.bkf
> # Once rsync drops the connection, start the whole cycle again.
> ./get_me.sh
>
> So, the script moves the temp file created by rsync onto the file itself,
> then calls rsync to continue syncing, and then after rsync loses its
> connection, the script calls itself and the cycle starts again.
You do know about the --partial option? That basically takes care of
this... although I'm wondering why your rsync doesn't delete the
tmpfile after the transfer is interrupted.
Using --inplace may also be useful.
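A minimal sketch of that approach (same host and path placeholders as in
your script, not tested here), using a retry loop instead of the
recursive call:

#!/bin/sh
# Retry until rsync exits successfully. --partial keeps the partial
# file when the transfer is interrupted, so each attempt resumes where
# the last one stopped; --timeout aborts a stalled connection so the
# loop can restart it.
until rsync --partial --inplace --timeout=90 user@remote:/dir/file.bkf ./file.bkf
do
    echo "transfer interrupted, retrying..." >&2
    sleep 5
done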
> Not very elegant, but it's working. Sort of. I'm now starting to decrease
> the overall throughput of the transfer, since each restart re-checks that
> the data is still the same on both sides, so here's the question...
rsync will check the existing data upon the start of each transfer,
unless...
> Would the "append" flag work well for this situation? I'd normally try a
> few tests myself, but according to my data, it'll be at least another 4
> days until the file is finished, and my parent's leave in 5 days. So, I'm
> a little hesitant to "experiment" on the transfer in progress.
The --append option assumes that the partial data already there is
correct, just too short. That is a good fit here, as it skips re-reading
the data that has already been transferred, which saves time. Note that
--append implies --inplace.
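As a sketch (same placeholder host and path as above, untested),
restarting the transfer with --append would look like:

# Trust the data already on the receiver and only send the missing
# tail of the file; as noted above, --append implies --inplace.
rsync --append --timeout=90 user@remote:/dir/file.bkf ./file.bkf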
Paul Slootman