Problem with checksum failing on large files

Craig Barratt craig at atheros.com
Mon Oct 14 16:48:01 EST 2002


> I tried "--block-size=4096" & "-c --block-size=4096" on 2 files (2.35 GB &
> 2.71 GB) & still had the same problem - rsync still needed to do a second
> pass to successfully complete. These tests were between Solaris client & AIX
> server (both running rsync 2.5.5). 

Yes, for a 2.35GB file with 4096-byte blocks there is, on average, a
92% chance that the first pass will fail.
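
For a rough sense of where a figure like that comes from: if you treat the
32-bit rolling checksum plus the csum_length bytes of md4 as one uniform
hash, the chance of at least one false block match over the whole file can
be estimated as below. This is an idealized model of my own, not the exact
derivation behind the 92% number, sketched in Python:

```python
import math

def first_pass_failure(file_bytes, block_len, csum_length):
    # Idealized model: the 32-bit rolling checksum plus csum_length
    # bytes of md4 are assumed to behave like a uniform hash of
    # (32 + 8*csum_length) bits.  Every byte offset in the file is
    # effectively tested against every block signature.
    n_blocks = file_bytes // block_len
    sig_bits = 32 + 8 * csum_length
    expected_false_matches = file_bytes * n_blocks / 2.0 ** sig_bits
    # Poisson approximation: P(at least one false match)
    return 1.0 - math.exp(-expected_false_matches)

print(first_pass_failure(int(2.35e9), 4096, 2))  # near-certain failure
print(first_pass_failure(int(2.35e9), 4096, 4))  # negligible
```

Under this model, csum_length = 2 makes a first-pass failure on a 2.35GB
file all but certain, while csum_length = 4 adds 16 more signature bits
and drives the estimate down to a tiny fraction of a percent.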

> As I mentioned in a previous note, a 900 MB file worked fine with just "-c"
> (but required "-c" to work on the first pass).
> 
> I'm willing to try the "fixed md4sum implementation", what do I need for
> this?

The "fixed md4sum" refers to some minor tweaks so that correct md4 sums
are computed for block lengths that are a multiple of 64 and for files
bigger than 512MB. But this shouldn't make a difference in your case.

Would you mind trying the following?  Build a new rsync (on both
sides, of course) with the initial csum_length set to, say, 4
instead of 2.  You will need to change it in two places in
checksum.c; an untested patch is below.  Note that this test
version is not compatible with standard rsync, so be sure to
remove the executables once you are done testing.

Craig

--- checksum.c  1999-10-25 15:04:09.000000000 -0700
+++ checksum.c.new      2002-10-14 09:40:34.000000000 -0700
@@ -19,7 +19,7 @@

 #include "rsync.h"

-int csum_length=2; /* initial value */
+int csum_length=4; /* initial value */

 #define CSUM_CHUNK 64

@@ -120,7 +120,7 @@
 void checksum_init(void)
 {
   if (remote_version >= 14)
-    csum_length = 2; /* adaptive */
+    csum_length = 4; /* adaptive */
   else
     csum_length = SUM_LENGTH;
 }
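
To see why those two extra bytes matter, here is a small standalone
sketch of the truncation the patch changes. The function and variable
names are mine, and MD5 stands in for MD4 (Python's hashlib does not
always ship an md4 implementation), but the truncation behaviour is the
same idea:

```python
import hashlib

def truncated_digest(data, csum_length):
    # rsync keeps only the first csum_length bytes of each block's
    # strong checksum.  MD5 is used here purely as a stand-in for MD4.
    return hashlib.md5(data).digest()[:csum_length]

# 4096 distinct two-byte blocks, comparable to the block count of a
# mid-sized file.
blocks = [bytes([i, j]) for i in range(64) for j in range(64)]

def collisions(csum_length):
    # number of blocks whose truncated checksum duplicates another's
    return len(blocks) - len({truncated_digest(b, csum_length)
                              for b in blocks})
```

With csum_length = 2 only 16 bits survive, so by the birthday bound
collisions are already common among a few hundred blocks; with
csum_length = 4 the 32-bit truncated sum makes them rare at this scale.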




More information about the rsync mailing list