Multiple rsyncs with multiple sshs... CPU overload

Waters, Michael R [IT] michael.r.waters at ssmb.com
Wed Oct 31 05:31:53 EST 2001


Hello Folks,

I am using rsync 2.4.6 over ssh on Solaris 2.6 machines.

It's been working great for months keeping three DMZ ftp servers in
sync. Now, though, I am trying to implement a new solution involving both DMZ
and "inside" ftp servers.

Basically, I want to sync files being ftp'ed to the DMZ server over to an
"inside" machine. Since some processing (decryption) then occurs on the
inside, I also need to send the last line of the file transfer log so the
inside machine knows it has work to do (another process watches the log for
new entries).  I need to use ssh, because rsh is not permitted in our
environment (so, as I understand it, running rsync as a server is not an
option).

This all works fine using the nifty trick of running a command on the remote
machine as part of the rsync call.  The problem is that it works for one
transfer, and even for a few, but when I run a stress test averaging 10
transfers a minute, it overloads the CPU to the point where some transfers
never complete.
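For reference, the per-transfer sequence described above looks roughly like the sketch below. The hostnames, paths, and log locations are placeholders I've made up for illustration, not the poster's actual setup; the point is that each transfer costs two fresh ssh sessions, which is where the CPU goes.

```shell
#!/bin/sh
# Hypothetical per-file sync script.  Every remote step below starts
# its own ssh session (and its own key exchange), so at ~10 files a
# minute the crypto setup cost alone adds up quickly on a 2001-era box.

SRC=/ftp/incoming/file.pgp          # file just uploaded to the DMZ server
DEST=inside-host:/ftp/incoming/     # placeholder "inside" machine
XFERLOG=/var/log/xferlog            # placeholder transfer log

# 1. Copy the new file; rsync spawns one ssh as its transport.
rsync -az -e ssh "$SRC" "$DEST"

# 2. Ship the last transfer-log line so the inside box knows it has
#    something to decrypt -- a second ssh session per transfer.
tail -1 "$XFERLOG" | ssh inside-host 'cat >> /var/log/xferlog.dmz'
```

(No test is attached since the script only makes sense against live DMZ/inside hosts.)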

I know this is because a new ssh session is started for each file transfer.
Because of the second step of sending the last log entry, though, the
situation doesn't really lend itself to batching things up into an rsync
every five minutes or so.  So I am wondering if there is a way to open a
*single* ssh session and have *all* rsyncs between the DMZ and inside server
reuse that "persistent" pipeline, instead of starting a new ssh each time.
From what I have read, I am pessimistic, but I figured it can't hurt to ask.
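One way to get exactly that persistent pipeline, assuming a much newer OpenSSH than the 2.x releases current when this was posted, is connection multiplexing: ControlMaster appeared in OpenSSH 3.9 and ControlPersist in 5.6. A single master connection stays open, and each subsequent rsync rides over it as a new channel instead of doing a fresh key exchange. A minimal sketch (hostname is a placeholder):

```
# ~/.ssh/config on the DMZ host -- ssh connection multiplexing.
# Requires OpenSSH 3.9+ (ControlMaster) and 5.6+ (ControlPersist),
# so this is a later-era answer than the ssh available in 2001.
Host inside-host
    ControlMaster auto                # first connection becomes the master
    ControlPath ~/.ssh/cm-%r@%h:%p    # socket reused by later connections
    ControlPersist 10m                # keep the master alive between rsyncs
```

With this in place, every `rsync -e ssh ... inside-host:...` after the first reuses the open master connection, so per-transfer crypto setup (the bulk of the CPU cost here) happens once rather than per file.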

If not, I'll have to work out something with the file transfer log, but it
sure would be great to get this working... it has greatly improved our
redundancy...

Thanks for any suggestions.

Mike




