Any work-around for a very large number of files yet?
m-crowder at ti.com
Mon Oct 21 14:39:00 EST 2002
Yes, I've read the FAQ, just hoping for a boon...
I'm in the process of relocating a large amount of data from one NFS server
to another (Network Appliance filers). The process I've been using is to
NFS-mount both the source and the destination on a server (Solaris 8) and simply
run rsync -a /source/ /dest. It works great except for the few volumes that hold
> 10 million files. On those I get the following:
ERROR: out of memory in make_file
rsync error: error allocating core memory buffers (code 22) at util.c(232)
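For context, this looks like the file-list memory limit: the FAQ says rsync
needs roughly 100 bytes of memory per file, so a 10-million-file run wants on
the order of 1 GB for the file list alone. One commonly suggested workaround
is to split the job so that no single rsync run has to build the full list.
A minimal sketch (hypothetical paths, and it assumes the files fan out across
the top-level directories rather than sitting in one giant subtree):

  #!/bin/sh
  # Sketch of the split-the-job workaround (hypothetical paths).
  # Run one rsync per top-level directory so each run's in-memory
  # file list stays far below the size that triggers
  # "out of memory in make_file".
  for d in /source/*/ ; do
      name=`basename "$d"`
      rsync -a "/source/$name/" "/dest/$name/" || echo "rsync failed on $name" >&2
  done

Files sitting directly under /source, and any single subtree that itself
holds most of the 10 million files, would still need their own handling, so
this only helps when the tree branches out near the root.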
Resyncing these volumes after the cutover takes days with tar, rather than the
few hours it would take with rsync -- this is making for some angry users.
If anyone has a work-around, I'd very much appreciate it.
Texas Instruments, KFAB Computer Engineering
email: m-crowder at ti.com