bug? -z option and large compressed data
jw at pegasys.ws
Fri Jul 4 11:48:58 EST 2003
On Thu, Jul 03, 2003 at 10:02:33AM -0700, Wayne Davison wrote:
> On Thu, Jul 03, 2003 at 05:25:28PM +0900, Yasuoka Masahiko wrote:
> > I'm Yasuoka Masahiko from Japan. I sent 2 messages about a bug on
> > token.c.
> Hi, I've been tracking your patches, but have not had much of a chance
> to look into this until today. Thanks for supplying patches with your
> bug report, BTW!
> > In addition to, tx_strm context keeps pending output. It must be
> > flushed here.
> It seems weird to me that the code was not flushing the output, but I am
> worried that changing this will make us incompatible with older rsync
> versions (since this data affects the compressor on each side without
> actually sending any of it over the socket). I also haven't seen any
> failures using Z_INSERT_ONLY instead of Z_SYNC_FLUSH. Did you encounter
> a failure case without flushing?
> > Please check below patch.
> I think I'd prefer a little simpler approach to fixing this. Here's a
> patch that expands the obuf to a larger size and just uses this larger
> size in this one part of the token compression code. This avoids the
> problem in your first patch where you affected too many things (by
> changing the value of MAX_DATA_COUNT) and avoids adding an extra loop
> as well. In my testing this fixed a compression failure when syncing
> a large iso.
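For what it's worth, the flushing behaviour under discussion is easy to observe through Python's zlib bindings (the same library underneath). This is only an illustration of what Z_SYNC_FLUSH does to pending compressor output, not of the token.c code path itself:

```python
import zlib

# Hypothetical illustration (not rsync's token.c): Z_SYNC_FLUSH forces
# deflate to emit everything buffered in the compressor and to end the
# output on an empty stored block, whose marker is 00 00 FF FF.
data = b"hello, rsync" * 100

c = zlib.compressobj()
out = c.compress(data)             # may return nothing yet; input is buffered
out += c.flush(zlib.Z_SYNC_FLUSH)  # pending output is pushed out here

# After a sync flush the byte stream is decodable up to this point ...
d = zlib.decompressobj()
assert d.decompress(out) == data

# ... and it ends with the empty-stored-block marker.
assert out.endswith(b"\x00\x00\xff\xff")
```

The byte-aligned 00 00 FF FF marker is exactly what lets the receiving side decode everything sent so far, which is why whether or not each side performs the flush changes the compressor state that both peers must keep in sync.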
I'm no expert on zlib, hence my silence on this. I much
prefer Wayne's approach on this. It was my first
I don't care for the literal 128, particularly uncommented.
I had been concerned because of non-proportionality but
examination of zlib docs shows that the worst-case expansion
is anything but proportional. In fact 128 is probably
excessive given "The worst case expansion is a few bytes for
the gzip file header, plus 5 bytes every 32K block".
J.W. Schultz Pegasystems Technologies
email address: jw at pegasys.ws
Remember Cernan and Schmitt