4TB and "150 000 000" files out of memory error

Matt McCutchen matt at mattmccutchen.net
Tue Jan 15 17:32:48 GMT 2008


On Tue, 2008-01-15 at 09:55 +0100, Sylvain Gargasson wrote:
> I work in production, so I'm not very happy about using a pre-release version...
> 
> Using the FAQ I calculated that I need 15GB of RAM or swap...
> 
> I can create a big swap without problem, but my client uses Samba to share these 150 million files, and if rsync uses all the RAM there may be a performance impact...
> 
> Do you know of a command that would make rsync use swap memory directly?

No.  If Linux had a ulimit for the amount of data a process can have in
RAM at once, that would be perfect, but unfortunately it doesn't.
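For the record, the shell does expose "ulimit -m" (RLIMIT_RSS), but
Linux kernels from 2.6 on accept the value and then ignore it, so it
cannot cap rsync's resident memory.  A quick illustration (the 1GB
figure is just a placeholder):

    ulimit -m 1048576        # supposedly caps resident memory at 1GB
    rsync -a src/ dest/      # ...but Linux does not enforce the limit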

Another option is to split the copy into smaller portions, each of which
can be done within the memory limit.  For example, if the size of your
source directory is split relatively evenly among a number of immediate
subdirectories, you could write a script to iterate over the source
subdirectories and copy each to the corresponding destination
subdirectory using a separate rsync run (possibly with an additional run
to do top-level deletions).
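Here is a rough sketch of such a script (untested; /data/src and
/backup/src are placeholder paths, and "-a --delete" stands in for
whatever options you currently pass to rsync):

    #!/bin/sh
    SRC=/data/src          # placeholder source path
    DEST=/backup/src       # placeholder destination path

    # One rsync run per immediate subdirectory, so each run only has
    # to hold that subtree's file list in memory at once.
    for dir in "$SRC"/*/; do
        name=$(basename "$dir")
        rsync -a --delete "$dir" "$DEST/$name"
    done

    # Remove destination subdirectories that no longer exist in the
    # source (the top-level deletions mentioned above).
    for dir in "$DEST"/*/; do
        name=$(basename "$dir")
        [ -d "$SRC/$name" ] || rm -rf "$dir"
    done

    # Final pass for files directly under the top level; '/*/' skips
    # the subdirectories already handled, and --delete leaves
    # excluded directories alone.
    rsync -a --delete --exclude='/*/' "$SRC/" "$DEST/"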

Matt


