rsync 1tb+ each day

Eric Whiting ewhiting at
Wed Feb 5 06:55:19 EST 2003

Replying to self after re-reading the original message...

-W will probably help in that it disables the incremental checksum block 
scanning for very large files. This is a good option to consider if you 
have a very fast network.

Even with -W, rsync will probably still create the temporary dot-file 
and will not do the file create/sync in place. (I might be wrong.)
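To make the -W behavior concrete, here is a minimal runnable sketch. It syncs a local temp tree so it can actually execute; the filenames and the real-world remote form shown in the comment are placeholders, not taken from the thread.

```shell
#!/bin/sh
# Demo of -W / --whole-file: skip rsync's delta (block checksum)
# algorithm and send each changed file in full -- usually a win on a
# fast link with slow disks.  Uses a local temp tree so it is runnable;
# in practice the destination would be remote, e.g.:
#   rsync -avW /u01/oradata/ backuphost:/u01/oradata/   (paths are examples)
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/dst"
printf 'oracle datafile stand-in\n' > "$tmp/src/system01.dbf"
rsync -aW "$tmp/src/" "$tmp/dst/"
cat "$tmp/dst/system01.dbf"
```

Note that -W only changes how changed files are transferred; the temp-file-then-rename step at the destination is unaffected.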

I have some 2+ GB Domino .nsf files that I sync every day using rsync -- I 
have not seen the incremental checksum block checking help much on 
these files either -- I think I'll try -W on that sync. It might help 
the sync time but hurt in terms of network loading.

I think some have suggested different -B options for larger files as 
well -- but I'm not sure what might work best with Oracle 
datafiles -- probably a -B that is the same size as the db_block_size.
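A runnable sketch of the -B idea follows. The 8192-byte value is an assumption (a common Oracle db_block_size default, but site-specific), and whether it actually helps is something to benchmark, not a rule.

```shell
#!/bin/sh
# Demo of -B / --block-size: set the checksum block size used by
# rsync's delta algorithm.  8192 here is an assumed db_block_size
# (a common Oracle default); check your own instance before copying it.
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/dst"
# Create a 32 KB stand-in "datafile" (4 blocks of 8192 bytes).
dd if=/dev/zero of="$tmp/src/users01.dbf" bs=8192 count=4 2>/dev/null
rsync -a -B 8192 "$tmp/src/" "$tmp/dst/"
ls -l "$tmp/dst/users01.dbf"
```

The intuition is that if rsync's block boundaries line up with the database's block boundaries, a changed database block dirties exactly one rsync block instead of two.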


Eric Whiting wrote:

> I think the -W option might do what you would have described here.
> eric
> Kenny Gorman wrote:
>> I am rsyncing 1tb of data each day.  I am finding in my testing that 
>> actually removing the target files each day then rsyncing is faster 
>> than doing a compare of the source->target files then rsyncing over 
>> the delta blocks.  This is because we have a fast link between the 
>> two boxes, and that our disk is fairly slow. I am finding that the 
>> creation of the temp file (the 'dot file') is actually the slowest 
>> part of the operation. This has to be done for each file because the 
>> timestamp and at least a couple blocks are guaranteed to have changed 
>> (oracle files).
>> My question is this:
>> Is it possible to tell rsync to update the blocks of the target file 
>> 'in-place' without creating the temp file (the 'dot file')?  I can 
>> guarantee that no other operations are being performed on the file at 
>> the same time.  The docs don't seem to indicate such an option.
>> Thx in advance..
>> -kg
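For the in-place update the question asks about: rsync releases after this thread added an --inplace option that writes updated blocks directly into the destination file instead of building a temp dot-file and renaming it. A minimal runnable sketch, using a local temp tree as a stand-in for the real source and target:

```shell
#!/bin/sh
# Demo of --inplace (available in rsync releases newer than this
# thread): update the destination file's blocks directly, skipping the
# temp dot-file + rename step.  Only safe when nothing else is touching
# the target file, as the poster guarantees here.
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/dst"
printf 'version 1\n' > "$tmp/src/data.dbf"
rsync -a "$tmp/src/" "$tmp/dst/"          # initial copy
printf 'version 2\n' > "$tmp/src/data.dbf"
rsync -a --inplace "$tmp/src/" "$tmp/dst/" # update in place
cat "$tmp/dst/data.dbf"
```

The trade-off is that an interrupted --inplace transfer can leave the destination file in a mixed old/new state, which is why rsync's default is the temp-file approach.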

More information about the rsync mailing list