Rsync stops randomly (using vanished files bash script)

Andre Majorel aym-cnysr at teaser.fr
Fri Aug 24 11:15:39 GMT 2007


On 2007-08-24 12:02 +0200, Samuel Vogel wrote:
> Andre Majorel wrote:
>
> >If you have directories with many files, rsync becomes a memory
> >hog. I've had it bring down a 1-GB machine while copying a 10-GB
> >news spool.
>
> Thanks for the reply. Do you mean on the rsync server or client
> side?

I don't recall from which machine the transfer was initiated, but I
think the side that hogs memory is the one where the large
directory exists. So the first time you run rsync, that's just the
source host; on subsequent runs the destination has a copy of the
directory too, so it's both hosts.
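
If you want to see that for yourself, GNU time (the external
program, not the shell builtin) reports the peak memory of the
local rsync process; the paths and host name below are only
placeholders:

  # rsync -an is a dry run, so only the file list gets built;
  # GNU time's -v prints "Maximum resident set size" at the end.
  # This only measures the local side; watch the remote rsync
  # with top or ps.
  /usr/bin/time -v rsync -an /bigdir/ otherhost:/bigdir/ >/dev/null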

> I'm sure that there is no file bigger than 500MB. And there
> can't be any files over 1GB, because we have 1GB quotas in place!

IME, rsync has no problem with large files, or even with many
files spread across a directory tree. What it can't deal with is a
single directory containing a huge number of files. The algorithm
apparently requires holding in memory some per-file data for *all*
the files in the directory being synced.
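
A quick and dirty way to spot the offending directories before you
sync (the news spool path is only an example):

  # list the directories with the most entries, biggest first;
  # ls -f skips sorting, which matters with very large directories
  find /var/spool/news -type d | while read -r d; do
      echo "$(ls -f "$d" | wc -l) $d"
  done | sort -rn | head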

If you have a news spool, a big maildir folder or some such
monstrosity, you might want to try excluding it from the sync and
see whether that makes a difference.
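
Untested, and the path and host name are made up, but something
along these lines:

  # leave any directory named Maildir out of the transfer; the
  # trailing slash makes the pattern match directories only
  rsync -a --exclude='Maildir/' /home/ otherhost:/backup/home/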

Good luck.

-- 
André Majorel <URL:http://www.teaser.fr/~amajorel/>
Do not use this account for regular correspondence.
See the URL above for contact information.

