Reliability and robustness problems

John rsync at computerdatasafe.com.au
Mon Jun 14 22:03:09 GMT 2004


Wayne Davison wrote:

>On Thu, Jun 10, 2004 at 07:21:41AM +0800, John wrote:
>  
>
>>flist.c: In function `send_file_entry':
>>flist.c:349: `lastdir_len' undeclared (first use in this function)
>>    
>>
>
>It patched the wrong function, which is really hard to understand
>because the line numbers in the patch are right for the 2.6.2 version of
>flist.c.  If you read the "@@" line before each hunk, you'll see the
>function name it should have patched.  The first hunk makes its change
>in receive_file_entry(), and the second makes its change in make_file().
>Both changes are simple enough that you can patch them by hand, if
>needed.
>  
>

Wayne suggested off-list to check whether the patch is already in place.

Grumble grumble.

I've installed rsync 2.6.2 at both sites.

We've also discovered that Telstra has "improved" some configuration 
item in its DSLAMs, or somewhere, and this has made the connexion 
unreliable. I've now made the necessary adjustment to the DSL-300 
(don't believe the D-Link website: they all talk telnet on 192.168.1.1), 
and we live in hopes that the DSL link will stay up for weeks instead of hours.

I've implemented some rudimentary performance monitoring: each hour I 
run these commands:
ifconfig tun0 | mail -s "Traffic report" summer@office.lan
ps ww u -C rsync | mail -s "rsync report" summer@office.lan
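In crontab form (a sketch only; it assumes local mail delivery is 
working, and the interface name and address are the ones from my setup 
above), the hourly runs look something like:

```
# min hour dom mon dow  command
0 * * * * /sbin/ifconfig tun0 | mail -s "Traffic report" summer@office.lan
0 * * * * ps ww u -C rsync | mail -s "rsync report" summer@office.lan
```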

This shows me that we're not transferring enormous amounts of data (so I 
guess the hard-link problem's gone), but we're still using lots of memory:

USER       PID %CPU %MEM   VSZ  RSS TTY      STAT START   TIME COMMAND
root     15191  4.0 68.4 323900 131352 ?     S    01:23   6:22 rsync --recursive --links --hard-links --perms --owner --group --devices --times --sparse --one-file-system --rsh=/usr/bin/ssh --delete --delete-excluded --delete-after --max-delete=80 --relative --stats --numeric-ids --timeout=3600 /var/local/backups 192.168.0.1:/var/local/backups/


It doesn't seem to be doing a lot of paging. (Note that I'm not at all 
sure my understanding of paging matches what Linux means by the term: 
I've seen systems report paging where there was no swap file, and my 
understanding of "paging" would rule that out.)

The memory usage is a concern. Not because I can't reduce it for this 
run (I've not yet made the refinements suggested, or implemented 
deletion of old backups), but because there are other systems that need 
to be backed up too.

It may be that there will be files from different systems that are 
identical: think system binaries, fonts and so on.

If these live in /var/local/backups/{host1,host2} and so on, and I've 
run a script to identify these dupes and eliminate them using hard 
links, can rsync preserve those hard links even though it can't see them all?

If not, I'll simply run the script in all locations whenever I feel the 
need. This uncertainty on my part is the reason I'm exposing the whole 
backup directory hierarchy to rsync rather than just parts of it.
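For what it's worth, the dedup script I have in mind is roughly the 
following sketch (my own illustration, not a tested production script): 
checksum every file under the given directories, sort so that identical 
files end up adjacent, and replace each duplicate with a hard link to 
the first copy seen. It assumes GNU find/xargs/md5sum and falls over on 
filenames containing newlines.

```shell
#!/bin/sh
# Sketch only: hard-link byte-identical files found under the
# directories given as arguments, e.g.
#   dedup_hardlink /var/local/backups/host1 /var/local/backups/host2
# Assumes GNU find/xargs/md5sum; breaks on filenames with newlines.
dedup_hardlink() {
    find "$@" -type f -print0 | xargs -0 -r md5sum | sort | {
        prev_sum=
        prev_path=
        while read -r sum path; do
            if [ "$sum" = "$prev_sum" ]; then
                # Same checksum as the previous file: replace this
                # copy with a hard link to the first one we saw.
                ln -f "$prev_path" "$path"
            else
                prev_sum=$sum
                prev_path=$path
            fi
        done
    }
}
```

My understanding is that rsync's --hard-links only preserves links 
between files that appear within the same transfer, so the whole 
/var/local/backups tree would need to go across in one run, as above, 
for links made this way to survive.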

