Feature Request: Multiple Streams

Tim Conway conway at us.ibm.com
Wed Mar 10 15:09:08 GMT 2004


Oh, that.  There was a lot of talk about it, but it hasn't happened within 
rsync.  I ended up writing my own client-server model utility in Perl.  We 
had a master copy of a distribution of EDI tools and views - 170GB or so 
in a couple of million files, as I recall, and we had to maintain around 
20 identical copies of it all around the world.
I didn't dream of trying to implement the rsync algorithm.  It worked 
strictly on timestamps, sizes, and filetypes, comparing the list from the 
master with the list from each replica.  That made for a very lightweight 
process on each replica, and the list for the master only had to be 
generated once per sync.
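Stripped of the client-server plumbing, the core comparison amounts to 
something like this (a rough sketch, not the actual utility - the field 
choices and the print-what-to-do output are just illustration):

    #!/usr/bin/perl
    # Sketch: metadata-only sync check.  Compares a master file list
    # against a replica's list using only type, size, and mtime --
    # no content checksums, unlike rsync.
    use strict;
    use warnings;
    use File::Find;

    # Build a hash of relative path => "type:size:mtime" under a root.
    sub scan {
        my ($root) = @_;
        my %meta;
        find(sub {
            my $rel = $File::Find::name;
            $rel =~ s/^\Q$root\E\/?//;
            return if $rel eq '';
            my @st = lstat($_) or return;
            my $type = -l _ ? 'l' : -d _ ? 'd' : 'f';
            $meta{$rel} = join ':', $type, $st[7], $st[9]; # size, mtime
        }, $root);
        return \%meta;
    }

    my ($master_root, $replica_root) = @ARGV;
    my $master  = scan($master_root);  # generated once per sync
    my $replica = scan($replica_root); # cheap to run on each replica

    # Missing or mismatched on the replica: copy from the master.
    for my $path (sort keys %$master) {
        print "UPDATE $path\n"
            if !exists $replica->{$path}
            || $replica->{$path} ne $master->{$path};
    }
    # On the replica but not the master: stale, delete it.
    for my $path (sort keys %$replica) {
        print "DELETE $path\n" unless exists $master->{$path};
    }

The real thing shipped the master's list out to each replica rather than 
scanning both ends from one machine, but the decision logic was along 
those lines.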
If your changes are mostly new files instead of small changes in large 
files, it might be what you need.  If Tim Renwick is still monitoring this 
list, maybe he could tar it up and pass you a copy.  It'd definitely 
require some porting for a new environment, unless you're replicating a 
Maxtor MaxAttach 4000 to others like it, using a Solaris box to handle the 
master replication tasks.  Fortunately, it's commented out the wazoo, so 
to speak (which made it relatively painless for Philips to lay me off).

> It would be nice to have it read the data once, and then sync it to all
> of the destinations once. IIRC, there was a move to do this at some
> point. Am I right?

Tim Conway
Unix System Administration
Contractor - IBM Global Services
conway at us.ibm.com



