rsync takes way too long to perform this....
marek at bmlv.gv.at
Thu Nov 15 17:13:09 EST 2001
>rsync -avnp remote::gif/ `find /home/www/html/ -maxdepth 1
>-name "*.[j,g][pg,if]*"` /tmp/
>If I run this on the local machine, the rsync server, it takes this
>---> root at server (0.34)# time find /home/www/html/ -maxdepth 1
>-name "*.[j,g][pg,if]*" -type f
>However if I run it from a client, it will take forever. Too much to
>run, it seems. Our directory structure has well over a million files.
>And this is just one directory under /home/www/html. We can't afford the
>cpu and system load to traverse everything, this is why I am using the
>find command. Shouldn't this work? It does get as far as retrieving the
>file list from the remote server.
What OS are you running on both systems? AFAIK Linux with ext2/ext3
currently has severe performance problems with large directories (more
than ~5000 files).
[Work is being done to avoid that: see the ext2 directory index patch at
Maybe that's your problem.
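One thing worth trying instead of the backtick-expanded find: let rsync do the pruning itself with include/exclude filters. Excluded directories are never descended into, so rsync skips the huge subtrees entirely. This is only a sketch, assuming the "gif" module maps to /home/www/html on the server; adjust the patterns to your actual extensions:

```shell
# '*' at the end excludes everything not rescued by an earlier --include,
# including subdirectories -- so rsync never descends into them and never
# builds a file list for the million-file tree.
rsync -av --include='*.jpg' --include='*.gif' \
      --exclude='*' remote::gif/ /tmp/
```

The filter rules are checked in order, so the image patterns must come before the catch-all `--exclude='*'`.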
(In my opinion, and strictly my own, a directory with that many files is
unmaintainable. I'd do some partitioning, even if it's only sorting by
filetype (.html, .gif, .jpg, ...).)
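For what that partitioning could look like, here is a minimal sketch that buckets a flat directory by extension; the `by-type` directory name is purely illustrative, and it assumes GNU find/xargs/mv:

```shell
cd /home/www/html
for ext in html gif jpg; do
    mkdir -p "by-type/$ext"
    # -maxdepth 1 stays at the flat top level; -print0 / xargs -0 cope with
    # odd filenames, and xargs batches the moves so the arg list stays small.
    find . -maxdepth 1 -type f -name "*.$ext" -print0 \
        | xargs -0 -r mv -t "by-type/$ext"
done
```

Smaller directories also make the original find-based selection much cheaper.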