Multiple rsyncs with multiple sshs...CPU overload

tim.conway at philips.com tim.conway at philips.com
Wed Oct 31 05:54:51 EST 2001


You're in luck, Mr. Waters.  If you're already using ssh, put up an rsync
daemon on the DMZ machine, limit it to 127.0.0.1, and use ssh port
forwarding to make a local port on the inside host an entry to the rsync
daemon port on the DMZ side.  One ssh persists, shared by many, possibly
concurrent, rsync sessions.  The same ssh can remain interactive and be
used as a pipe for your remote commands.
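A minimal sketch of that setup follows.  The module name, paths, host
name, and local port number are illustrative assumptions, not anything
from your environment:

```shell
# /etc/rsyncd.conf on the DMZ host -- bind the daemon to loopback only,
# so it is reachable solely through the ssh tunnel.  (Module name "ftp"
# and the path are made-up examples.)
#
#   address = 127.0.0.1
#   port = 873
#   [ftp]
#       path = /export/ftp
#       read only = false

# Start the daemon on the DMZ host:
rsync --daemon --config=/etc/rsyncd.conf

# From the inside host, open ONE long-lived ssh session that forwards a
# local port to the DMZ daemon.  This same session stays usable as an
# interactive shell for your remote commands:
ssh -L 8730:127.0.0.1:873 dmzhost

# Every transfer now reuses that single ssh instead of spawning a new
# one per file -- talk rsync protocol to the forwarded local port:
rsync -av rsync://localhost:8730/ftp/incoming/ /export/ftp/incoming/
```

Concurrent transfers multiplex over the one forwarded port, so the
per-connection ssh key-exchange cost that was eating your CPU is paid
once.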


Tim Conway
tim.conway at philips.com
303.682.4917
Philips Semiconductor - Longmont TC
1880 Industrial Circle, Suite D
Longmont, CO 80501
Available via SameTime Connect within Philips, n9hmg on AIM
perl -e 'print pack(nnnnnnnnnnnn, 
19061,29556,8289,28271,29800,25970,8304,25970,27680,26721,25451,25970), 
".\n" '
"There are some who call me.... Tim?"




"Waters, Michael R [IT]" <michael.r.waters at ssmb.com>
Sent by: rsync-admin at lists.samba.org
10/30/2001 11:31 AM

 
        To:     "'rsync at lists.samba.org'" <rsync at lists.samba.org>
        cc:     (bcc: Tim Conway/LMT/SC/PHILIPS)
        Subject:        Multiple rsyncs with multiple sshs...CPU overload
        Classification: 



Hello Folks,

I am using rsync 2.4.6 over ssh on Solaris 2.6 machines.

It's been working great for months keeping three DMZ ftp servers in
sync.  Now, though, I am trying to implement a new solution with DMZ and
"inside" ftp servers.

Basically, I want to sync files being ftp'ed to the DMZ server over to an
"inside" machine, and since some processing (decryption) then occurs, I
also need to send the last line of the file transfer log so it knows it
needs to do something (another process checks for new entries to the
log).  I need to use ssh, because rsh is not permitted in our environment
(so I understand an rsync server is not an option).

This all works fine with the nifty bit of running a command on the remote
machine via the rsync call.  The problem is that it works for one
transfer, and even a few.  When I try to do a stress test, though,
averaging 10 transfers a minute, it kills the CPU to the point where some
things are never completed.

I know this is on account of running an ssh session for each new file
transfer.  Because of the second part, sending the last entry of the file
transfer log, the situation doesn't really lend itself to doing an rsync
every five minutes or so.  So, I am wondering if there is a way to open
up a *single* ssh session and have *all* rsyncs between the DMZ and
inside server use that "persistent" pipeline, instead of a new ssh each
time.  From what I have read, I am pessimistic, but I figured it can't
hurt to ask.

If not, I'll have to work out something with the file transfer log, but
it sure would be great to get this working; it has greatly improved our
redundancy.

Thanks for any suggestions.

Mike



