Problem with checksum failing on large files

Terry Reed twreed at leapwireless.com
Mon Oct 14 14:50:00 EST 2002


> -----Original Message-----
> From: Derek Simkowiak [mailto:dereks at itsite.com] 
> Sent: Saturday, October 12, 2002 2:14 PM
> To: Craig Barratt
> Cc: Terry Reed; Donovan Baarda; 'rsync at lists.samba.org'
> Subject: Re: Problem with checksum failing on large files 
> 
> 
> > My theory is that this is expected behavior given the checksum size.
> 
>      Craig,
> 	Excellent analysis!
> 
> 	Assuming your hypothesis is correct, I like the 
> adaptive checksum idea.  But how much extra processor 
> overhead is there with a larger checksum bit size?  Is it 
> worth the extra code and testing to use an adaptive algorithm?
> 
> 	I'd be more inclined to say "This ain't the 90's 
> anymore", realize that overall filesizes have increased (MP3, 
> MS-Office, CD-R .iso, and DV) and that people are moving from 
> dialup to DSL/Cable, and then make either the default (a) 
> initial checksum size, or (b) block size, a bit larger.
> 
> 	Terry, can you try his test (and also the -c option) 
> and post results?
> 

I tried "--block-size=4096" & "-c --block-size=4096" on 2 files (2.35 GB &
2.71 GB) & still had the same problem - rsync still needed to do a second
pass to successfully complete. These tests were between Solaris client & AIX
server (both running rsync 2.5.5). 
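
If I follow Craig's analysis correctly, that is roughly what the numbers
predict. Here is a back-of-the-envelope sketch in Python, assuming the
first pass matches blocks on the 32-bit rolling checksum plus a 2-byte
truncated MD4 (48 bits total), and that 700 bytes is the 2.5.x default
block size (corrections welcome if I have the signature size wrong):

    # Expected number of false block matches on the first pass, assuming
    # a 48-bit per-block signature (32-bit rolling checksum + 2-byte
    # truncated MD4).  The sizes and block sizes below are from my tests.
    SIG_BITS = 32 + 16

    def expected_false_matches(size, block):
        blocks = size / block                     # checksums the receiver sends
        return size * blocks / 2.0 ** SIG_BITS    # ~size trial offsets on sender

    for size in (2.35e9, 2.71e9):
        for block in (700, 4096):
            print(f"{size:.2e} B, block {block}: "
                  f"~{expected_false_matches(size, block):.1f} false matches")

That works out to roughly 28 and 37 expected false matches at the default
block size, and still about 5 and 6 with --block-size=4096. Since the
expectation scales as size^2 / (block * 2^48), a bigger block only helps
linearly, which would explain why 4096 made no visible difference on my
2-3 GB files.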

As I mentioned in a previous note, a 900 MB file worked fine with just "-c"
(without "-c", the first pass failed).

I'm willing to try the "fixed md4sum implementation"; what do I need for
this?
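
For reference, my understanding is that the fix addresses a 32-bit bit
counter in rsync's MD4 code, which would explain why only files of 512 MB
or more (like my 900 MB one) are affected. A minimal sketch of the
arithmetic, under that assumption:

    # MD4 pads each message with its length in *bits*; if that length is
    # kept in a 32-bit counter (my reading of the md4sum bug), it wraps
    # at 2**32 bits = 512 MB and the padded length is wrong.
    wrap_bytes = 2 ** 32 // 8        # 536,870,912 bytes = 512 MB
    my_file    = 900 * 10 ** 6       # the 900 MB file from my earlier note
    print(my_file > wrap_bytes)      # True: its bit count overflows
    print((my_file * 8) % 2 ** 32)   # the wrapped value that gets padded in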

I cannot try these tests on a Win32 machine because Cygwin does not support
files > 2 GB & I could only find rsync as part of Cygwin.  I have neither
the time nor the patience to try to get rsync to compile using MS VC++ :-)
Is there a Win32 version of rsync with large file support available?  I do
not have any Linux boxes available to test large files.

Thanks.

--
Terry


