not always making hard links?

Paul Slootman paul at debian.org
Fri Sep 10 09:35:54 GMT 2004


I'm using rsync 2.6.3pre1 to transfer a rather large Debian archive
(126GB, more than 30 million files). It contains about 450 daily
snapshots, where unchanged files are hardlinked between the snapshots
(so many files have hundreds of links).
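
To illustrate what I mean (paths and output mocked up here), an unchanged
file in two neighbouring snapshots is the same inode on the sending side,
with a correspondingly high link count:

    $ ls -li 20040908/foo.deb 20040909/foo.deb
    1234567 -rw-r--r-- 452 mirror mirror 10240 Sep  8  2004 20040908/foo.deb
    1234567 -rw-r--r-- 452 mirror mirror 10240 Sep  8  2004 20040909/foo.deb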

It's been running for some time now, and I found that while it's far
from done, it has already used 165GB on the receiving end. Investigation
shows that files which are hardlinked on the sender are not hardlinked
on the receiver...
Any ideas why?
Debugging may prove a bit problematic, as it takes a _long_ time to read
the list of files... However, I'm willing to try, if anyone has any
suggestions.
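
For anyone wanting to check the same thing, counting the files with a link
count above 1 and comparing disk usage of the whole tree against a single
snapshot (snapshot directory name made up) shows it quickly:

    # how many files are actually hardlinked?  run on both ends and compare
    find /extra/pub -xdev -type f -links +1 | wc -l
    # real disk usage of the whole tree vs. one snapshot
    du -sh /extra/pub /extra/pub/20040908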

The receiving end is an rsync daemon, using the following rsyncd.conf:

pid file=/var/run/rsyncd.pid

[pub]
	comment = public archive
	path = /extra/pub
	use chroot = yes
	max connections=1
	lock file = /var/lock/rsyncd
	read only = no
	list = yes
	uid = mirror
	gid = nogroup
	strict modes = yes
	hosts allow = 192.168.1.1
	ignore errors = no
	ignore nonreadable = yes
	transfer logging = no
	timeout = 6000
	refuse options = checksum dry-run
	dont compress = *.gz *.tgz *.zip *.z *.rpm *.deb *.iso *.bz2 *.tbz
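
For completeness: assuming that file is installed as the default
/etc/rsyncd.conf, the daemon side is started in the usual way (it could of
course also be run from inetd instead):

    rsync --daemon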


The sending side uses the following command line:

    rsync -avz --bwlimit=5000 /extra/pub/ 192.168.1.73::pub
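
For reference, the man page says hard links are only preserved when
-H / --hard-links is given (it is not implied by -a), so a run that keeps
them would look something like:

    rsync -avzH --bwlimit=5000 /extra/pub/ 192.168.1.73::pub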

Virtual memory usage is 2776MB :-)
Another thing I noticed: I first messed up the command a couple of
times and hit ctrl-C. However, the forked process on the server side
continued running, meaning that I could not reconnect without first
manually killing that process (because of the max connections=1 setting).
Strace showed that the process was doing a select() on a socket,
presumably the one formerly connected to the second rsync process on
the server, which _did_ go away when the client was killed.
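
To get going again I had to find and kill that leftover child by hand on
the server, roughly like this (the PID is whatever ps reports):

    # on the receiving server
    ps ax | grep '[r]sync'
    kill <pid-of-the-stale-child>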


Paul Slootman

