out of memory in receive_file_entry rsync-2.6.2

Widyono widyono at seas.upenn.edu
Tue Aug 10 18:55:10 GMT 2004


On Tue, Aug 10, 2004 at 11:36:26AM -0700, James Bagley Jr wrote:
> Hello,
> 
> I've had some problems using rsync to transfer directories with more than
> 3 million files.  Here's the error message from rsync:
>
> ERROR: out of memory in receive_file_entry
> rsync error: error allocating core memory buffers (code 22) at util.c(116)


I've rsync'ed 7413719 files (680GB) successfully (2.6.2 on a RH7.2
server, Fermi SL3.0.1 client).  When I do the top-level directory all
at once, there are several points where it locks up the server for
minutes at a time (apparently in directories with large numbers of
files; I suppose it's an ext3 issue).

The server side hit 1GB of memory near its peak, and the client side
hit 540MB.  Ick.  At least after upgrading to 2.6.2 it was possible to
do this at all (it wasn't with the version provided by RedHat's RPMs).

For future sanity, I'm subdividing the top-level directory into
several discrete rsyncs on subdirectories.  I like your idea in
general (though I agree it's ugly) for dynamically addressing this
issue, but for now I can afford the luxury of manually subdividing the
tree.
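
If anyone wants to script that subdivision rather than type it out by
hand, here's a rough sketch (the paths and rsync options below are
only examples, not what I actually ran):

    #!/bin/sh
    # One rsync per top-level subdirectory, so each run builds a much
    # smaller file list (and uses far less memory) than one big run.
    SRC=/data                       # example source tree
    DEST=backuphost:/backup/data    # example destination
    for dir in "$SRC"/*/ ; do
        name=`basename "$dir"`
        rsync -a "$dir" "$DEST/$name/"
    done
    # Files sitting directly in $SRC itself are not copied by this
    # loop; a separate rsync with --exclude='*/' would pick those up.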

Regards,
Dan W.

Better ideas?  No.  However, my suggestion would be to run a nightly
script on the *server* side (if you have access) that counts files and
puts the tallies in selected higher-level directories.  So,
e.g. /.filecount would have the number of files in /tmp, /usr, /var,
etc., and /usr/local/src/.filecount would have the number of files in
each of its subdirs.  That saves you from ssh'ing in and running find
so many times.  Depending on how dynamic your disk utilization is, you
could make this a weekly or monthly analysis instead.
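
Something along these lines would do it (the directory list, output
file, and cron entry are just examples):

    #!/bin/sh
    # Nightly tally: count the files under each selected directory and
    # record the totals in /.filecount, so clients can size up a tree
    # without ssh'ing in and running find themselves.
    OUT=/.filecount
    : > "$OUT"
    for dir in /tmp /usr /var; do
        count=`find "$dir" -type f 2>/dev/null | wc -l`
        echo "$dir $count" >> "$OUT"
    done
    # Run it from cron, e.g.:  0 3 * * * /usr/local/sbin/filecount.sh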

