Any work-around for very large number of files yet?

tim.conway at philips.com
Mon Oct 21 17:53:23 EST 2002


Mark:  You are S.O.L.  There's been a lot of discussion on the subject,
and so far, the only answer is faster machines with more memory.  For my
own application, I have had to write my own system, which is best
described as find, sort, diff, grep, cut, tar, gzip.  It's a bit more
complicated than that, and the find, sort, diff, grep, and cut are
implemented in perl code.  It also leans on some assumptions I can make
about our data concerning file naming, dating, and sizing, and it has no
replacement for rsync's main magic: the incremental update of a file.
Nonetheless, a similar approach might do well for you, since chances are
most of your changes are additions and removals of files, and changes to
existing files always entail a change in size and/or timestamp.
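
To make that approach concrete, here is a minimal sketch of the same
idea in plain shell.  This is my illustration, not Tim's actual code:
it assumes GNU find (for -printf) and GNU tar (for -T), that both
trees are mounted locally as in your setup, and that no filenames
contain whitespace; the /tmp manifest names are made up.

#!/bin/sh
SRC=/source
DEST=/dest

# 1. Build a sorted "path size mtime" manifest of each tree.
find "$SRC"  -type f -printf '%P %s %T@\n' | sort > /tmp/src.list
find "$DEST" -type f -printf '%P %s %T@\n' | sort > /tmp/dst.list

# 2. Diff the manifests: "<" lines are files new or changed on the
#    source; ">" lines are files gone from (or stale on) the source.
diff /tmp/src.list /tmp/dst.list > /tmp/changes
grep '^<' /tmp/changes | cut -d' ' -f2 > /tmp/to-copy
grep '^>' /tmp/changes | cut -d' ' -f2 > /tmp/maybe-gone

# 3. Ship new and changed files whole -- no rsync-style incremental
#    update of file contents.  (Across a network, gzip the stream;
#    locally it buys nothing.)
tar -C "$SRC" -cf - -T /tmp/to-copy | tar -C "$DEST" -xf -

# Deletions are left out: any path in maybe-gone that is not also in
# to-copy no longer exists on the source and could be removed from
# $DEST.

The point is that the per-file bookkeeping lives in flat text
manifests that sort and diff work through, rather than in the file
list rsync tries to hold in core -- which is what makes this sort of
approach workable at the 10-million-file scale.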

Tim Conway
conway.tim at sphlihp.com reorder name and reverse domain
303.682.4917 office, 303.921.0301 cell
Philips Semiconductor - Longmont TC
1880 Industrial Circle, Suite D
Longmont, CO 80501
Available via SameTime Connect within Philips, caesupport2 on AIM
"There are some who call me.... Tim?"

"Crowder, Mark" <m-crowder at ti.com>
Sent by: rsync-admin at lists.samba.org
10/21/2002 08:37 AM

        To:     rsync at lists.samba.org
        cc:     (bcc: Tim Conway/LMT/SC/PHILIPS)
        Subject:        Any work-around for very large number of files yet?

Yes, I've read the FAQ, just hoping for a boon...
I'm in the process of relocating a large amount of data from one NFS
server to another (Network Appliance filers).  The process I've been
using is to NFS-mount both source and destination on a server
(Solaris 8) and simply use rsync -a /source/ /dest.  It works great
except for the few that have > 10 million files.  On these I get the
following:
ERROR: out of memory in make_file
rsync error: error allocating core memory buffers (code 22) at util.c(232)
It takes days to resync these after the cutover with tar, rather than
the few hours it would take with rsync -- this is making for some
angry users.  If anyone has a work-around, I'd very much appreciate it.
Thanks, 
Mark Crowder 
Texas Instruments, KFAB Computer Engineering 
email: m-crowder at ti.com 