ERROR: out of memory in receive_file_entry

James Bagley Jr jabagley at cvs.agilent.com
Thu Jul 22 20:47:02 GMT 2004


Hello,

I'm looking for some possible solutions to the out of memory problem when
dealing with very large directory trees.

Client:  linux-2.4.20
Server:  HP-UX 11.11
rsync version:  2.6.2
Directory size:  400Gbytes
number of files:  3273133
rsync cmd: rsync -avRx --progress --stats --numeric-ids --blocking-io --delete -e ssh hp-ux.server:/path /local/linux/

It seems to fail after consuming roughly 250M of memory and receiving a
little over 3 million files in the file list.  Both client and server have
1G of RAM and 2G of swap.  Here is the error message from rsync:

<snip>
ERROR: out of memory in receive_file_entry
rsync error: error allocating core memory buffers (code 22) at util.c(116)
rsync: connection unexpectedly closed (63453387 bytes read so far)
rsync error: error in rsync protocol data stream (code 12) at io.c(342)
</snip>

It's hard to tell for certain which system is running out of memory here,
though the function name receive_file_entry points at the receiving side
(the Linux client in this pull), since that's the code that builds the
incoming file list.  Failing at only ~250M on a box with 1G of RAM also
makes me wonder whether a per-process limit is biting rather than real
memory exhaustion, but I haven't confirmed that.
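As a rough sanity check (my numbers, not anything official from the rsync
docs), pre-3.0 rsync is said to need on the order of 100 bytes of memory
per file-list entry, which lines up with where it died:

# Back-of-envelope estimate of rsync's file-list memory (Python).
# The bytes-per-entry figure is an assumption, not a documented constant;
# the real cost varies with path lengths and the attributes kept by -a.
files = 3273133
bytes_per_entry = 80          # ~250 MB observed / 3.27M files
print("%.0f MiB" % (files * bytes_per_entry / 2.0**20))   # -> ~250 MiB

At that rate, even doubling the number of files roughly doubles the memory
footprint, so simply throwing a bigger box at it doesn't scale.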

I'd thought about trying a 64-bit build (we will have an AMD64 Linux test
box pretty soon), but since we are only talking about 250M I can't imagine
that would help; if anything, 64-bit pointers would make each file-list
entry slightly larger.  I might try it anyway when the box gets here.

The only real solution I have been able to come up with is to divide and
conquer: run multiple rsync processes against subsets of this single
filesystem, so that no one invocation has to build a file list for all
3 million+ files (a rough sketch follows below).  But 400G is a "small"
filesystem in our environment, and this approach will be very difficult
to implement on larger filesystems.  Any thoughts?
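For concreteness, here is a minimal sketch of the wrapper I have in mind
(untested; the host, paths, and option set are just carried over from the
command above): list the top-level directories on the source, then run one
rsync per directory so each file list stays small.

#!/usr/bin/env python
# Divide-and-conquer sketch: one rsync per top-level source directory,
# so no single run holds millions of file-list entries in memory.
import subprocess

SRC_HOST = "hp-ux.server"   # placeholders, as in the command above
SRC_ROOT = "/path"
DEST = "/local/linux/"

# Enumerate top-level entries on the remote side over ssh.
names = subprocess.check_output(
    ["ssh", SRC_HOST, "ls", "-A", SRC_ROOT], text=True
).splitlines()

for name in names:
    # -R preserves the full source path under DEST, as in the original run.
    subprocess.check_call(
        ["rsync", "-avRx", "--numeric-ids", "--delete", "-e", "ssh",
         "%s:%s/%s" % (SRC_HOST, SRC_ROOT, name), DEST])

The obvious gaps: --delete now only sees one subtree at a time, so entries
removed from the top level of /path itself never get cleaned up on the
destination, and a single directory that itself holds millions of files
still blows up.  That is exactly what makes this hard to generalize to
bigger filesystems.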

--
James Bagley			|           CDI Innovantage
jabagley at cvs.agilent.com	| Technical Computing UNIX Admin Support
   DON'T PANIC			|       Agilent Technologies IT
Phone: (541) 738-3340		|          Corvallis, Oregon
--

