rsync scalability

Jason Philbrook jp at saucer.midcoast.com
Wed Mar 2 15:05:35 GMT 2005


The size of the file won't be a problem; I back up many gigs every night.

If all 500 locations don't hit your server at the same time, you may be
fine without any special work. Staggering the connections will also
conserve bandwidth at the central site. I'd try it without the -z
compression option first, to be easy on the server's CPU. I'd also have
a script stop the rsyncd processes and restart the daemon once a week,
to clear lingering rsyncd processes left behind by interrupted or
broken connections.
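
As a rough sketch of both ideas (the hostname, module name, paths, and
times below are just placeholders, not anything specific to your setup):
a cron-driven pull on each client with a random delay to spread out the
connections, and a weekly cron entry on the server to bounce the daemon:

    #!/bin/sh
    # pull-image.sh - hypothetical client-side pull, run from cron at
    # each of the 500 sites. The random sleep (0-3599 s) spreads the
    # connections over an hour instead of having every site hit the
    # central server at the same moment.
    sleep $(( $(od -An -N2 -tu2 /dev/urandom) % 3600 ))
    # Plain -a, no -z: skipping compression keeps server CPU load down.
    rsync -a rsync://central.example.com/images/pc-image.img /var/images/

    # Hypothetical weekly crontab entry on the central server: stop any
    # lingering rsyncd processes left from broken connections, then
    # restart the daemon (config path is an assumption).
    0 4 * * 0  pkill -x rsync; sleep 5; rsync --daemon --config=/etc/rsyncd.conf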

On Wed, Mar 02, 2005 at 03:26:11PM +0100, Stefan.Wiederoder at kaufland.de wrote:
> Hello rsync-users,
> 
> I need to transfer a PC image file (200 to 400 MB) to 500 locations,
> all of which will connect as concurrent users on day X.
> 
> what parameters do you suggest to use on the central rsync-server?
> any recommendation for socket options?
> 
> is it a good idea to split the file into several pieces, let's say 10 MB each?
> Does this speed up the block-checksum calculation, or is it better to
> leave the file in one piece?
> 
> our rsync server has 1 GB of RAM - is this OK?
> 
> bye,
> Stefan

-- 
/*
Jason Philbrook   |   Midcoast Internet Solutions - Internet Access,
    KB1IOJ        |  Hosting, and TCP-IP Networks for Midcoast Maine
 http://f64.nu/   |             http://www.midcoast.com/
*/