Transferring very large files / and restarting failures

Todd Papaioannou lucky at
Wed Jul 27 23:29:46 GMT 2005


Thanks for the swift answers and insight.  

> > However, the stats shown during the progress seem to imply that the 
> > whole transfer is starting again.
> Yes, that's what rsync does.   It retransfers the whole file, but it
> uses the local data to make the amount of data flowing over 
> the socket (or pipe) smaller.  The already-sent data is thus 
> coming from the original, partially-transferred file rather 
> than coming from the sender (which would lower the network 
> bandwidth if this were a remote connection).

Hmm, OK. I guess my mental model of what rsync does is wrong.
If I read this correctly, when I'm doing a local-to-local copy I get no
benefit from re-using the partial copy. If, however, I were doing
a remote copy, I would definitely see a benefit.
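As I now understand it, the delta transfer works roughly like this (a
much-simplified Python sketch with fixed block boundaries; real rsync
uses a rolling weak checksum plus a strong hash so matches can occur at
any byte offset, and these function names are my own invention):

```python
import hashlib

BLOCK = 4  # toy block size; rsync picks a much larger one

def signatures(partial: bytes) -> dict:
    # Receiver side: one digest per fixed-size block of the partial file.
    return {hashlib.md5(partial[i:i + BLOCK]).hexdigest(): i
            for i in range(0, len(partial), BLOCK)}

def delta(source: bytes, sigs: dict) -> list:
    # Sender side: reference blocks the receiver already holds,
    # send literal bytes for everything else.
    ops = []
    for i in range(0, len(source), BLOCK):
        block = source[i:i + BLOCK]
        if len(block) == BLOCK and hashlib.md5(block).hexdigest() in sigs:
            ops.append(("ref", sigs[hashlib.md5(block).hexdigest()]))
        else:
            ops.append(("lit", block))   # only these bytes cross the wire
    return ops

def rebuild(partial: bytes, ops: list) -> bytes:
    # Receiver reconstructs the full file from its partial copy + delta.
    return b"".join(partial[val:val + BLOCK] if kind == "ref" else val
                    for kind, val in ops)

source = b"The quick brown fox jumps over the lazy dog."
partial = source[:24]                    # e.g. an interrupted transfer

ops = delta(source, signatures(partial))
assert rebuild(partial, ops) == source
literal = sum(len(v) for k, v in ops if k == "lit")
print(f"{literal} of {len(source)} bytes sent as literals")
```

On a local-to-local copy the "referenced" blocks are read from disk
either way, so only a remote copy actually saves bandwidth.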

> > In the future /path/to/dest will be an NFS mount.
> You don't want to do that unless your network speed is 
> higher than your disk speed -- with slower net speeds you are 
> better off rsyncing directly to the remote machine that is 
> the source of the NFS mount so that rsync can reduce the 
> amount of data it is sending.  With higher net speeds you're 
> better off just transferring the data via --whole-file and 
> not using --partial.  One other possibility is the --append 
> option from the patch named patches/append.diff -- this 
> implements a more efficient append mode for incremental 
> transfers (I'm considering adding this to the next version of rsync).

Ahh, that sounds like what I'm looking for. I was hoping rsync
supported something like ftp restart, which would restart the file
transfer down to the byte level. I'll give it a look. Not sure I have the 
mojo to mess with the patches though! 
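For what it's worth, the byte-level restart I had in mind is easy to
sketch (illustrative Python with made-up file names, in the spirit of
FTP's REST command and the --append mode described above):

```python
import os
import shutil
import tempfile

def append_resume(src_path: str, dst_path: str, chunk: int = 64 * 1024) -> None:
    # Trust whatever is already at the destination and copy only the
    # missing tail. Note the trade-off this append style makes: nothing
    # verifies that the existing prefix is actually intact.
    done = os.path.getsize(dst_path) if os.path.exists(dst_path) else 0
    with open(src_path, "rb") as src, open(dst_path, "ab") as dst:
        src.seek(done)
        shutil.copyfileobj(src, dst, chunk)

# Demo: interrupt a copy partway through, then resume it.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "src.bin")
    dst = os.path.join(d, "dst.bin")
    data = os.urandom(100_000)
    with open(src, "wb") as f:
        f.write(data)
    with open(dst, "wb") as f:
        f.write(data[:40_000])        # simulate the failed first attempt
    append_resume(src, dst)
    with open(dst, "rb") as f:
        resumed_ok = f.read() == data
```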

By the way, is there another protocol you might know of, other than ftp
that supports byte level restart/append? 


