Problems while transferring big files

Matt McCutchen matt at
Sun Mar 8 14:10:56 GMT 2009

On Sun, 2009-03-08 at 14:48 +0200, Shachar Shemesh wrote:
> Wayne Davison wrote:
> >
> > We hypothesize that there can be an accidental match in the checksum
> > data, which would cause the two sides to put different streams of data
> > into their gzip compression algorithm, and eventually get out of sync
> > and blow up.  If you have a repeatable case of a new file overwriting an
> > existing file that always fails, and if you can share the files, make
> > them available somehow (e.g. put them on a web server) and send the list
> > (or me) an email on how to grab them, and we can run some tests.
> >
> > If the above is the cause of the error, running without -z should indeed
> > avoid the issue.
> >   
> If I understand the scenario you describe correctly, won't running 
> without -z merely cause actual undetected data corruption?

No, rsync's post-transfer checksum will catch the corruption, and rsync
will redo the transfer.  IOW, rsync is designed to recover from false
block matches, except that false matches in a compressed transfer can
cause a fatal error by throwing the -z protocol out of sync.
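To make that recovery mechanism concrete, here is a minimal, illustrative sketch (not rsync's actual code) of the two-level checksum idea: a deliberately weak per-block sum that can collide, and a whole-file strong checksum that catches the resulting corruption so the transfer can be redone. The block size, the toy weak sum, and the helper names are all assumptions for illustration; real rsync uses a rolling Adler-style weak sum plus MD4/MD5 strong sums.

```python
# Toy model of rsync-style false-match recovery (illustrative only).
import hashlib

BLOCK = 4

def weak_sum(block: bytes) -> int:
    # Deliberately weak: byte sum mod 2**16, so collisions are easy to construct.
    return sum(block) % (1 << 16)

def strong_sum(data: bytes) -> str:
    # Stand-in for rsync's post-transfer whole-file checksum.
    return hashlib.md5(data).hexdigest()

def receiver_reconstruct(old: bytes, new: bytes) -> bytes:
    # Receiver matches each new block against old blocks by weak sum only,
    # mimicking an "accidental match".  (Real rsync also checks a per-block
    # strong sum, which makes false matches rare but not impossible.)
    old_blocks = {weak_sum(old[i:i+BLOCK]): old[i:i+BLOCK]
                  for i in range(0, len(old), BLOCK)}
    out = b""
    for i in range(0, len(new), BLOCK):
        blk = new[i:i+BLOCK]
        out += old_blocks.get(weak_sum(blk), blk)  # false match possible here
    return out

old = b"\x01\x03AB" + b"rest"   # first block sums to 1+3+65+66 = 135
new = b"\x02\x02AB" + b"rest"   # different bytes, same byte sum (135)
got = receiver_reconstruct(old, new)

assert got != new                           # corruption from the false match
assert strong_sum(got) != strong_sum(new)   # post-transfer checksum catches it
# On this mismatch, rsync redoes the transfer with a stronger check,
# so an uncompressed transfer self-heals instead of silently corrupting.
```

With -z the situation is worse than this sketch suggests: the false match feeds different byte streams into the two sides' compressors, so the stream desynchronizes and the transfer aborts rather than reaching the checksum-and-retry step.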
