Rsyncing really large files

Wayne Davison wayned at
Sat Mar 5 21:17:29 GMT 2005

On Sat, Mar 05, 2005 at 10:58:12PM +0200, Shachar Shemesh wrote:
> What about my proposal, though? Should I send in a patch so it can be
> evaluated (hoping I'll manage to find my way around, that is).

I'm always open to potential patches.  If we can make things faster
without taking up large chunks of extra memory (for the normal case),
I'm certainly for it.  Obviously the current code does not handle the
case where a too-small block-size over-populates the hash table well,
so having the software flex and put more memory into making an atypical
case work faster would be fine -- I just don't want to see the memory
requirements for the normally-chosen block-size go up all that much.

It would be a good idea to measure some of the things that we're
changing, such as how even the current distribution of weak checksums
is across the hash table, how fast or slow lookups are for certain
block counts, and how well an improved algorithm compares.

