DO NOT REPLY [Bug 4768] problem sync big filesystem over slow connection

samba-bugs at samba-bugs at
Fri Jul 13 21:12:39 GMT 2007

------- Comment #7 from dieter.ferdinand at  2007-07-13 16:12 CST -------
I use this command line to sync the backup:
rsync -l -H -r -t -D -p -v -P --bwlimit=10 -z --delete-after --force
--timeout=300 --exclude-from=/cron/backup_server.not --partial
--link-dest=/backup/server/master server::server/ /backup/server/13/

Normally this works fine, but I had to reinstall the server, and somebody put too
much data on the server in one day, so my backup is out of sync and I must
resync it.

At the moment I restart the program every time it finishes, to get the rest
of the data onto my backup system.
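The manual restarting described above can be scripted. This is only a sketch: `retry_until_ok` and `RETRY_DELAY` are names made up here, and the commented-out rsync invocation just repeats the options from the report.

```shell
#!/bin/sh
# Rerun a command until it exits 0, pausing between attempts.
# RETRY_DELAY (seconds, default 60) is a knob invented for this sketch.
retry_until_ok() {
    until "$@"; do
        echo "exit status $?; retrying in ${RETRY_DELAY:-60}s" >&2
        sleep "${RETRY_DELAY:-60}"
    done
}

# With the command line from the report it would be used like this:
# retry_until_ok rsync -l -H -r -t -D -p -v -P --bwlimit=10 -z \
#     --delete-after --force --timeout=300 --partial \
#     --link-dest=/backup/server/master server::server/ /backup/server/13/
```

Note that rsync exits non-zero on a timeout or dropped connection, so the loop keeps going until a run completes cleanly.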

rsync works fine for this purpose, and the backups of local systems cause no
problems. With a 100 Mbit network, the synchronisation is fast enough to copy
all the data in one day.

But the remote system must sync over the Internet at a maximum of 10 kByte/s
with data compression, and it is very slow when a lot of data must be copied.
This sometimes causes problems when a user thinks he must make a backup of a
directory with 2 GB of files on the server.

After more than two weeks, 92% of the files are synchronised, and I think I
will need over 10 days for the rest.

I hope you change your mind and add an option in the future which makes it
easy to add, on every run of the program, the next part of the data without the
risk of losing an already-transferred part when the program is terminated
before all the data is copied.

An option which tells rsync to only make hardlinks and not copy any data could
also help to solve this problem.
Then I could make all the hardlinks in a first run and copy all the missing
data in a second.

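The "hardlinks first, data later" idea can be approximated today outside rsync with GNU `cp -al`, which populates the new backup directory with hardlinks and moves no file data; a later rsync then only replaces files that actually changed. A minimal local demonstration (temp directories stand in for the real `/backup/server/...` paths from the report; `cp -al` and `stat -c` are GNU/Linux specific):

```shell
#!/bin/sh
set -e
# Stand-ins for the previous backup and the new backup target.
old=$(mktemp -d); new=$(mktemp -d)
echo "unchanged data" > "$old/keep.txt"

# Step 1: hardlink-copy the previous backup into the new target.
# No file data is read or written, so this is fast even for big trees.
cp -al "$old/." "$new/"

# Both paths now point at the same inode.
[ "$(stat -c %i "$old/keep.txt")" = "$(stat -c %i "$new/keep.txt")" ]

# Step 2 (against the real server) would then be a plain rsync of only the
# changed files, e.g.:
# rsync -aH --delete-after --partial --bwlimit=10 -z server::server/ "$new/"
```

Because the hardlinks already exist after step 1, an interrupted transfer in step 2 never loses the previous data; each rerun only adds what is still missing.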
