Improving the rsync protocol (RE: Rsync dies)

Neil Schellenberger nschellenberger at orchestream.com
Wed May 22 07:38:01 EST 2002


>>>>> "Dave" == Dave Dykstra <dwd at bell-labs.com> writes:

    Dave> On Fri, May 17, 2002 at 01:42:31PM -0700, Wayne Davison
    Dave> wrote:
    >> On Fri, 17 May 2002, Allen, John L. wrote:
    >> > In my humble opinion, this problem with rsync growing a huge
    >> > memory footprint when large numbers of files are involved
    >> > should be #1 on the list of things to fix.
    >>
    >> I have certainly been interested in working on this issue.  I
    >> think it might be time to implement a new algorithm, one that
    >> would let us correct a number of flaws that have shown up in
    >> the current approach.
    >>
    >> Toward this end, I've been thinking about adding a 2nd process
    >> on the sending side and hooking things up in a different
    >> manner:

    Dave> I do shudder when I read about Martin's plans for a complete
    Dave> redesign because I have a lot of doubts about how it will
    Dave> affect performance.  The only reason that Rsync is as
    Dave> popular as it is today is because of its performance, and if
    Dave> it gets significantly worse people simply won't use it.  I
    Dave> think an evolution of the current code base is much more
    Dave> likely to be able to keep the good performance than a
    Dave> complete redesign.

This is going to need an up-front disclaimer: I use rsync every day
for non-trivial purposes.  It is like a chainsaw: very powerful, but
you must respect it or you WILL lose an appendage.  I have the utmost
respect for Dave, Martin, Tridge, et al., and I understand that
maintainers have a difficult job to do - not only maintaining the
software but also acting as "gatekeepers" for new ideas and features.

That having been said, I do have to point out that "fast" is only
useful if the results are correct.

At the moment I have to break up large jobs into smaller ones to have
any hope of them completing on any given run.  This has nothing to do
with memory footprint - I just get various protocol errors or hangs.
(This is a simple, if high-volume, non-ssh Solaris 8-to-Solaris 8
rsync across a WAN to a daemon.  It has been a persistent problem
across all 2.4.x and 2.5.x versions.)
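
The workaround amounts to one rsync per top-level directory, roughly
like the following sketch (simplified, not my actual script;
"dest::module" stands in for the real daemon module):

    #!/bin/sh
    # Sketch only: run one rsync per top-level directory instead of
    # one monolithic job, so a protocol error costs one subtree, not
    # the whole run.
    for d in /src/*/
    do
        name=`basename "$d"`
        rsync -a "$d" "dest::module/$name/" || echo "FAILED: $name" >&2
    done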

The new reports of corrupted files also send a cold chill down my
spine....  (Although I do recognise that it is too early to blame
rsync.)  What's worse, in my case I don't have shell access to the
source machine to do checksums on the hundreds of thousands of files
(about 150G of data).  Yikes!

I submit that no matter how fast the current rsync is or becomes, it
risks being abandoned if it ever gains a widespread reputation for
unreliability.  Hangs, cores, and
protocol errors are one thing - and many of us are prepared to tough
those out for the myriad benefits that rsync DOES offer.  Corrupt
files and unreliable transfers requiring a non-deterministic number of
passes to correct would be a totally different kettle of fish.

I also agree with Dave: reimplementations rarely live up to their
promise.  But, as Martin and Wayne and others have pointed out,
although very bandwidth efficient, the existing implementation leaves
something to be desired in terms of its memory, disk, and (to a
lesser degree) CPU performance....  And I think that those of us who
have
spelunked through the guts of rsync trying to fix various hang bugs
can all agree that the current I/O implementation is a little
convoluted.  (And, despite heroic patching efforts, it still hangs.)

For some time now I have been thinking that some standardised pipeline
breakdown would be a good idea.  [Oddly, while I was drafting this, JW
Schultz's post came in and said exactly the same thing I was going to
say about process pipelines and leveraging common ABIs.  No, really!
Anyway, I won't repeat it.]  The motivation is to isolate
functionality into simple(r), more focused, more easily verifiable
components with fewer interdependencies.  (High cohesion, low
coupling.)  This ought, in turn, to encourage more third party
auditing and implementations of the various components.
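
To make the shape of the thing concrete, the decomposition I have in
mind looks something like this (the rsync-* stage names are pure
invention for illustration, and I'm ignoring for the moment where the
network split would go):

    # Hypothetical stages (none of these tools exists today):
    #   rsync-scan : walk the source tree, emit one pathname per line
    #   rsync-gen  : compare against the destination, emit block
    #                signatures for each candidate file
    #   rsync-send : turn signatures into literal data + match tokens
    #   rsync-recv : rebuild the destination files from that stream
    rsync-scan /src | rsync-gen /dst | rsync-send /src | rsync-recv /dst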

For example, one could easily imagine a variety of different
implementations of "scanner/generator" processes feeding a standard
"sender" (a la "find | cpio").  This type of approach would lend
itself beautifully to the implementation of some of the features
people are clamouring for:

  o  Include/Exclude ideas of various kinds simply become scanner
     implementations.  The simplest scanner is just an "ls" or
     perhaps a "find" command (see the first sketch after this
     list).  The most complicated need only be limited by
     imagination.  Better still, it would then be up to the
     implementors, rather than the core maintainers, to document and
     explain them.  Endlessly.

  o  Phil's need for checksumming could be handled by a scanner
     which maintains a persistent data store of inodes, paths,
     checksums, etc. (roughly mocked up in the second sketch below).
     Perhaps a tie-in/integration with Tripwire, AIDE, or mhash?
     This might also be generally useful as a performance boost for
     those with large, relatively static, trees.  (Like me. :-)

  o  Jos's batch ideas could be implemented simply as a capture of
     the generator output to be played back into the sender (third
     sketch below).  If we could fix it so that the file could be
     played back in parts, we could take some of the sting out of a
     protocol error two hours into a big rsync.

  o  Perhaps the existing over-the-wire protocol could be emulated
     using a special "adaptor" process at the end of the sender
     pipeline?  I don't know how feasible it would be, but it would
     allow for backward OTW compatibility with older clients while
     allowing progress in the core.
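
On the scanner point, "the simplest scanner is find" can be taken
quite literally; anything that emits one pathname per line could be
dropped into the same slot:

    # A trivial scanner stage built from a real tool: emit regular
    # files under /src, pruning anything beneath a "tmp" directory.
    find /src -type d -name tmp -prune -o -type f -print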
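
The persistent-checksum scanner can likewise be mocked up with stock
tools, just to show the idea (whitespace in pathnames would need
care, and a real implementation would key on size/mtime/inode too):

    # Snapshot "<crc> <bytes> <path>" lines for every file, then emit
    # only the paths whose line changed since the previous run.
    find /src -type f -exec cksum {} \; | sort > /var/tmp/sums.new
    [ -f /var/tmp/sums.old ] || : > /var/tmp/sums.old
    comm -13 /var/tmp/sums.old /var/tmp/sums.new | awk '{ print $3 }'
    mv /var/tmp/sums.new /var/tmp/sums.old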
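
And the batch capture falls out of the same decomposition essentially
for free (reusing the invented stage names from the earlier sketch):

    # Capture the generator stream with tee while still using it...
    rsync-scan /src | rsync-gen /dst | tee /var/tmp/batch.1 | rsync-send /src
    # ...and replay it later without rescanning anything:
    rsync-send /src < /var/tmp/batch.1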

To address the "Big Bang" problem, perhaps existing code could be
reused to provide initial implementations of each of the components?
Refactoring, anyone?  (Have I used enough Software Engineering
buzzwords yet or what?  Ack.)

So, the overall goal would be to increase reliability and to encourage
third party auditing/reimplementation by providing (conceptually)
smaller, simpler, and more focused pipeline stages.

Many small tools!  Many small tools!  Many small tools!

[Thwack.  Ooof.  Sorry.  I feel much better now.]

Basically the less stuff in the core, the better the odds of being
able to get it working properly.


Regards,
Neil

P.S.  JW Schultz's and Bob Bagwill's posts came in while I was writing
this.  Some weird synergy thing going on out there....

-- 
Neil Schellenberger          | Voice : (613) 599-2300 ext. 8445
Orchestream Americas Corp.   | Fax   : (613) 599-2330
350 Terry Fox Drive          | E-Mail: nschellenberger at orchestream.com
Kanata ON, Canada, K2K 2W5   | URL   : http://www.orchestream.com/



