Rsync with multiple huge filesystems

Leaw, Chern Jian chern.jian.leaw at
Tue Sep 10 11:34:25 EST 2002


I invoked several rsync processes simultaneously. The rsync script reads
from a file each time it is invoked. The file read into the script contains
lists of filesystems to be sync-ed from the client machine to the NFS
fileserver. The client machine and the NFS fileserver are on separate NIS
domains that have been made to trust one another, and the two domains are
also in separate geographical locations.

However, it looks like the rsync processes may have caused the local disk on
the NFS fileserver to hit 100% capacity. The filesystems to be copied from
the client machine have already been mounted on the fileserver. Below is the
rsync code:

for i in `cat datafile.txt`
do
   echo rsync -avz --dry-run --rsync-path=/usr/bin/rsync --delete $i $i
   /usr/intel/bin/rsync -av --rsync-path=/usr/bin/rsync --delete $i $i
done

#cat datafile.txt
.... (list continues...) 
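
A more defensive version of the loop (a sketch; the sample paths below are
placeholders for the real entries in datafile.txt) reads the file line by
line instead of word-splitting `cat`, and strips stray carriage returns so a
DOS-edited list cannot produce "name^M" paths on the server:

```shell
# Hypothetical sample list (the real datafile.txt holds absolute paths);
# the second entry carries a DOS CR ending to show the problem being avoided.
printf '/f1/my_schematics\n/f1/my_layouts\r\n' > /tmp/datafile.txt

# Read line by line and strip carriage returns before handing paths to rsync.
while IFS= read -r line; do
    i=$(printf '%s' "$line" | tr -d '\r')
    [ -n "$i" ] || continue
    echo rsync -av --rsync-path=/usr/bin/rsync --delete "$i" "$i"
done < /tmp/datafile.txt
```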

Each text file read by the script contains about 8-10 filesystem entries to
be sync-ed, and there are 5 such text files serving as input to the script;
hence I invoked 5 rsync processes. Each filesystem is between 10GB and 15GB
in size, and the entries are specified in the input text files as absolute
paths. The network bandwidth is approximately

These filesystems were mounted as follows on both the client and the NFS server:

After synchronizing the filesystems to the NFS fileserver, I discovered
duplicate copies of the copied filesystems, with one of them mounted on /.
# cd /f1
#ls -l
drwxrws--- .. root   engineering  .... my_schematics
drwxr-xr-x  .. root   system       .... my_schematics

Notice that the group ownership and permissions differ. I had invoked rsync
as root, belonging to the engineering group.

# cd f1
# df -k  *
/dev/vg32lv02 17670144 ...   /f1/my_schematics
/dev/hda4 393216 ...      /      (mounted on /)
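
Since / here is only about 384 MB while each filesystem is 10-15 GB, a
single copy landing under an unmounted (or CR-mangled) destination path is
enough to fill it. A small guard could check where a destination actually
resolves before syncing (a sketch; assumes GNU df/awk, and the path checked
is illustrative):

```shell
# Sketch: print the filesystem a destination actually lives on, and warn
# if it resolves to / (i.e. the intended mount is missing).
dest=/
mnt=$(df -k "$dest" | awk 'END { print $NF }')
if [ "$mnt" = "/" ]; then
    echo "WARNING: $dest is on the root filesystem, not a separate mount" >&2
fi
```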

I wasn't sure which of the 2 I should remove, hence I did:
# cd /f1
# ls -lafd *
# cd my_schematics^D
my_schematics/ my_schematics^M/
I then removed my_schematics^M/ by doing a rm -rf my_schematics?.
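
The pair of names above can be reproduced and told apart without guessing at
the hidden character (a sketch in a scratch directory; `ls -b` is GNU ls and
prints control characters as escapes instead of leaving the two names
visually identical):

```shell
# Sketch: recreate a CR-suffixed duplicate and reveal it with ls -b.
scratch=$(mktemp -d)
mkdir "$scratch/my_schematics" "$scratch/$(printf 'my_schematics\r')"
ls -b "$scratch"    # the duplicate shows up with a visible \r escape
rm -rf "$scratch"
```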
My questions are:
This problem occurred only for some filesystems copied from the file lists;
not all of the copied filesystems caused / to become 100% full.
1) Is there a limit to the number of processes which rsync can handle, or a
maximum number of channels which rsync is capable of handling?
2) The rsync on the client machine is version 1.7.1, protocol version 17.
The NFS server has version 2.4.4, protocol version 24. Are they compatible?
3) Some filesystems copied from the source machine were duplicated and had
an additional "?" appended to the end of their names; could rsync
differentiate which filesystems were to be sync-ed? Other filesystems were
not duplicated and did not show the "?" at the source, yet when copied over
they produced a duplicate copy with the hidden character appended to the
name. This made the troubleshooting task more ambiguous.
Could someone kindly help me out?



More information about the rsync mailing list