Problem with checksum failing on large files

Donovan Baarda abo at
Mon Oct 14 16:26:01 EST 2002

On Mon, Oct 14, 2002 at 06:22:36AM -0700, jw schultz wrote:
> On Mon, Oct 14, 2002 at 10:45:44PM +1000, Donovan Baarda wrote:
> > Does the first pass signature block checksum really only use 2 bytes of the
> > md4sum? That seems pretty damn small to me. For 100M~1G you need at least
> > 56bits, for 1G~10G you need 64bits. If you go above 10G you need more than
> > 64bits, but you should probably increase the block size as well/instead.
> It is worth remembering that increasing the block size with
> a fixed checksum size increases the likelihood of two
> unequal blocks having the same checksums.

I haven't done the maths, but I think the difference this makes is
negligible, and is far outweighed by the fact that a larger block size
means fewer blocks.
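The rough maths is just the birthday bound: with n blocks and a b-bit
checksum, the chance of any two unequal blocks colliding grows with n^2, so
halving the block count (by doubling the block size) cuts the collision
chance by roughly four. A quick sketch in Python (illustrative only; the
bit counts quoted above assume the truncated md4sum plus the rolling
checksum together):

```python
import math

def collision_prob(num_blocks: int, checksum_bits: int) -> float:
    """Birthday-bound estimate of any two unequal blocks sharing a
    checksum: p ~ 1 - exp(-n*(n-1) / 2^(b+1))."""
    n = num_blocks
    return 1.0 - math.exp(-n * (n - 1) / 2.0 ** (checksum_bits + 1))

# A 1GB file with 1KB blocks has ~10^6 blocks; compare checksum widths.
for bits in (16, 32, 48, 64):
    print(bits, collision_prob(10**6, bits))
```

Doubling the block size halves n, which shrinks the n^2 term faster than
keeping the checksum size fixed hurts.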

> I think we want both the block and checksum sizes to
> increase with file size.  Just increasing block size gains
> diminishing returns but just increasing checksum size will
> cause a non-linear increase in bandwidth requirement.
> Increasing both in tandem is appropriate.  Larger files call
> for larger blocks and larger blocks deserve larger
> checksums.
> I do think we want a ceiling on block size unless we layer
> the algorithm.  The idea of transmitting 300K because a 4K
> block in a 2GB DB file was modified is unsettling.

I think that command-line overrides are critical. Just as you can force a
blocksize, you should be able to force a sigsumsize. However, the defaults
should be reasonable.

BTW, regarding the suggestions earlier in this thread to use -c: from my
reading of the man page, -c will only help if the files happen to be
identical...

And the reason rsync's second pass succeeds is that it transmits the whole
file if the final whole-file checksum doesn't match after the rsync
algorithm has run.
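That safety net can be sketched as follows (Python as illustration; md5
stands in for rsync's md4, and both function arguments are hypothetical
stand-ins for the real transfer machinery):

```python
import hashlib

def receive_file(reconstruct_from_delta, fetch_whole_file, expected_digest):
    """Reconstruct a file from the delta, verify the whole-file
    checksum, and fall back to a full transfer on mismatch."""
    data = reconstruct_from_delta()
    if hashlib.md5(data).digest() != expected_digest:
        # second pass: the delta reconstruction was wrong,
        # so just transfer the whole file
        data = fetch_whole_file()
    return data
```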

Someone else suggested using a locality limit to minimize the candidate
blocks, and hence minimize the chance of errors... I think a locality
heuristic might be good for speeding up finding a match, but a locality
limit is no good because it sacrifices the chance of finding distant
matches. Part of rsync's benefit is that it can handle relocated data, and
a locality limit breaks that.

> Note for rsync2 or superlifter:
> 	We may want to layer the algorithm so that large
> 	files get a first pass with large blocks but
> 	modified blocks are accomplished with a second pass
> 	using smaller blocks.
> 	ie. 2GB file is checked with 500KB blocks and a
> 	500KB block that changed is checked with 700B
> 	so rsyncing the file would be almost like rsyncing
> 	a directory with -c.

Layered algorithms with variable block sizes are tricky. At first glance
they seem like a good idea, but when you go to implement them, it all gets
messy. The big problem is that "miss" data is not block aligned, and to
handle relocation of data you need to pretty much transmit a small-blocksize
signature of the whole file... by the time you do that, you might as well
have just used the smaller blocksize.

Every time I think of layer/variable block tricks I think of another way
that it might work, but every time I pursue them, they hit problems...

Another one that just sprang to mind: combining it with a client-side delta.

1) "client" sets blocksize to something large.
2) "client" requests signature from server for unknown parts of target file
  (ie the whole file on first parse, large unknown blocks on subsiquent passes)
3) "server" sends requested signature.
4) "client" rolls through basis, searching for match data, and populates
   a sparse new file with the hits.
5) "client" reduces blocksize (by half? by quarter?).
6) while blocksize is larger than some minimum, goto 2)
7) "client" requests remaining missing blocks and completes sparse new

This could work... it could also have serious problems when you go to
implement it...

ABO: finger abo at for more info, including pgp key
