Long time needed for "Building file list" Any suggestions ?
jim at jrssystems.net
Mon Mar 22 23:19:43 GMT 2004
> This does bring up one point though. Is there any way to optimize file
> list building? It seems like that turns into a huge bottleneck in the
> "lots of files" situation.
If you already know you're working with a mirror on the other end, and
you know when your last sync was, and you're a moderately decent Perl
hacker, you can pretty easily hack together a script that will take the
output of something like
find / -ctime -1h
and use it to just do a straight copyover of all files that have been
modified on the primary machine since the last synchronization.
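A minimal sketch of that find-plus-copyover idea, assuming a source tree of /data and a mirror host reachable as "mirror" over passwordless ssh (both names are placeholders, not from the original post). Note the "-ctime -1h" unit suffix is BSD find syntax; GNU find wants "-cmin -60" instead:

```shell
#!/bin/sh
# Straight copyover of everything changed in the last hour.
# "/data" and "mirror" are placeholder names for this sketch.
SRC=/data
LIST=/tmp/changed.$$

# BSD find takes unit suffixes like -1h; on GNU find use: -cmin -60
find "$SRC" -type f -ctime -1h > "$LIST"

# Tar up the changed files and unpack them on the mirror at the
# same paths (tar strips the leading "/" on create, so we extract
# relative to "/" on the far side).
tar -cf - -T "$LIST" | ssh mirror tar -xf - -C /

rm -f "$LIST"
```

This transfers every changed file in full, with none of rsync's delta or checksum work, which is exactly why the list builds so fast.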
For reference, on several servers I admin with anywhere from 60GB to
200GB worth of data on them, it takes less than 5 seconds to generate a
list of changed files using the find command as shown above, under most
server load conditions. (Also for reference this is with various
versions of FreeBSD from 4.9 to 5.1.)
What that *won't* do is get rid of any files that have been deleted
since the last time you sync'ed. But, for instance, I
sometimes use a little Perl hack like that to bounce the major changes
frequently during the day, then use rsync once daily during downtime
around 1AM to catch anything my "bounces" missed (like deleting files).