Options for a "I'm done" flag file
mark0x01 at gmail.com
Wed Apr 29 04:14:36 MDT 2015
You could try increasing the timeout delay, rather than resume.
rsync will tolerate quite long network dropouts and still carry on.
I have managed to keep an internet transfer of up to 100GB alive for two
I didn't find --partial to be much use for very large-scale transfers,
due to the very CPU-intensive checksum process.
By large scale, I mean I have rsync'd several petabytes of backup files, up to
500GB in size, over the last five years with good success.
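As a sketch of the timeout suggestion above (the host name and paths here are placeholders, not from the original message):

```shell
# Raise rsync's I/O timeout (in seconds) so brief network dropouts
# don't abort the transfer; rsync will wait and then carry on.
# Host and paths are illustrative only.
rsync -av --timeout=600 \
    /data/backups/ user@remote.example.com:/data/backups/
```

A timeout of 600 seconds is an arbitrary example; pick a value longer than the dropouts you actually see on the link.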
On 29/04/2015 2:49 a.m., Simon Hobson wrote:
> As an aside to this, part of the problem I've been having is the transfer timing out/getting interrupted during a particularly large file (1G, new file, 2-3 hours if it works).
> So I've been experimenting with --partial and --partial-dir=.rsync-partial, which weren't working as expected. It appears to work at first - if the transfer is interrupted, the partial file is correctly saved in the named directory.
> Then if I run the script again, it deletes the partial file before starting again.
> I found that I needed to also specify --delete-delay to avoid deleting the partial file before it's used.
> Is this "known", because it isn't implied (as I read it) by the --partial-dir section in the man page ?
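The combination Simon describes can be sketched as follows (host name and paths are placeholders, not from the original message):

```shell
# Keep interrupted transfers in .rsync-partial so a re-run can resume
# from them, and use --delete-delay so deletions happen at the end of
# the run, after the partial file has been picked up, rather than
# before the transfer starts. Host and paths are illustrative only.
rsync -av --partial-dir=.rsync-partial --delete-delay \
    /data/backups/ user@remote.example.com:/data/backups/
```

Note that --partial-dir implies --partial, so the latter does not need to be given separately.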