[clug] Anyone using 'snappy', Google's fast compression?

David Austin david at d-austin.net
Thu May 12 00:33:43 MDT 2011


On Thu, May 12, 2011 at 4:26 PM, Mike Carden <mike.carden at gmail.com> wrote:

> > led me to this GOOG project:
> > <http://code.google.com/p/snappy/>
>
> Well I hadn't heard of it and it looks interesting.
>
> Slightly tangentially, I was reading today about the stream
> compression employed by LTO 5 tape libraries. It grabs a data block,
> caches it then has a stab at compressing it. Then it compares the
> compressed block to the original and writes out the smaller of the two
> to tape - avoiding the trap of making the data bigger if it was
> already compressed or is incompressible.
>
> This is probably old hat to anyone who has worked with the guts of
> compression implementations before, but I was struck by its simplicity
> and usefulness.
>

Note that the tape still has to store an extra bit per block recording
whether the compressed or the original block was written.  Thus, for
incompressible data it can still cause a small increase in size - one
bit per block.
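The store-the-smaller-of-the-two trick can be sketched in a few lines of
Python using zlib (this is an illustration of the idea, not the actual LTO
format; using a whole flag byte instead of a single bit is a simplification):

```python
import zlib

def pack_block(block: bytes) -> bytes:
    """Write the smaller of (compressed, original), plus a one-byte flag."""
    compressed = zlib.compress(block)
    if len(compressed) < len(block):
        return b"\x01" + compressed   # flag 1: payload is compressed
    return b"\x00" + block            # flag 0: payload stored verbatim

def unpack_block(packed: bytes) -> bytes:
    flag, payload = packed[0], packed[1:]
    return zlib.decompress(payload) if flag == 1 else payload

# Compressible text shrinks; already-compressed data only costs the flag.
text = b"abc" * 1000
assert unpack_block(pack_block(text)) == text
already_packed = zlib.compress(b"abc" * 1000)
assert len(pack_block(already_packed)) == len(already_packed) + 1
assert unpack_block(pack_block(already_packed)) == already_packed
```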

No lossless compression algorithm can reduce the size of all inputs:
if one existed, you could run it on its own output again and again
until nothing was left, which is absurd.  (Equivalently, by the
pigeonhole principle there are fewer possible short outputs than long
inputs.)
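The counting argument behind this is short enough to check directly
(a sketch, not tied to any particular compressor):

```python
# Pigeonhole: there are 2**n bit strings of length n, but only
# 2**n - 1 bit strings of any length strictly less than n
# (sum of 2**k for k = 0 .. n-1).  So no injective (i.e. lossless)
# map can send every length-n input to a strictly shorter output.
n = 16
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))  # lengths 0 .. n-1
assert shorter_outputs == inputs - 1
assert shorter_outputs < inputs
```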

David
