rsync scalability

Jason Philbrook jp at
Wed Mar 2 15:05:35 GMT 2005

The size of the file won't be a problem; I back up many gigs every night.

If all 500 locations don't hit your server at the same time, you may be
fine without any special work. Staggering the connections will also
conserve your bandwidth at the central site. I'd probably try it without
the -z compress option first, to be easy on the server CPU. I'd probably
also have a script to stop the rsyncd processes and restart the daemon
once a week to clear lingering rsyncd's left from interrupted/broken
connections.
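A minimal sketch of the client side of this, assuming a hypothetical
daemon host rsync.example.com exporting a module named "images" and a
local target path /var/images/pc-image.img (all names are illustrative,
not from the original post):

```shell
#!/bin/sh
# Run from cron at each of the 500 sites. Spread the clients over a
# two-hour window so they don't all hit the central server at once.
DELAY=$(( RANDOM % 7200 ))
echo "sleeping ${DELAY}s before pulling image"
sleep "$DELAY"

# Pull without -z: the image is binary, so compression mostly costs
# server CPU for little gain. --partial keeps partly transferred data
# so a broken connection can resume; --timeout drops dead connections
# instead of leaving a lingering rsyncd on the server.
rsync --partial --timeout=600 \
    rsync://rsync.example.com/images/pc-image.img \
    /var/images/pc-image.img
```

On the server side, the weekly restart could be a simple cron job that
stops the daemon and starts it again, and rsyncd.conf's "max connections"
setting can cap how many clients transfer simultaneously.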

On Wed, Mar 02, 2005 at 03:26:11PM +0100, Stefan.Wiederoder at wrote:
> Hello rsync-users,
> I need to transfer a PC image file (200 to 400 MB) to 500 locations, 
> all of which will act as concurrent users on day X.
> what parameters do you suggest to use on the central rsync-server?
> any recommendation for socket options?
> is it a good idea to split the file into several pieces, let's say 10 MB each?
> Does this speed up the blocksum calculation or is it better to leave the 
> file in one piece?
> our rsync-server has 1 GB RAM - is this OK?
> bye,
> Stefan

Jason Philbrook   |   Midcoast Internet Solutions - Internet Access,
    KB1IOJ        |  Hosting, and TCP-IP Networks for Midcoast Maine   |   
