timeout on large file sets
Phil Glatz
phil at glatz.com
Wed Nov 13 19:39:00 EST 2002
I'm getting "unexpected EOF in read_timeout" when dealing with large
collections of files.
The root path is /files, which has 1.4GB of data in 483260 files in 5328
subdirectories.
I tried setting "--timeout=600" as a test, but it still times out after
about 30 seconds. Shouldn't that setting prevent this? Or could network
connectivity issues be causing the timeout? (As far as I know, my
connection is fine.)
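For reference, the pull looks roughly like this (the host name and
destination path here are placeholders):

    rsync -av --timeout=600 backuphost::files /local/mirror/files

(If I'm reading the rsyncd.conf man page right, a per-module "timeout"
setting on the daemon side would override the client's --timeout.)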
One of the subdirectories (/files/logs) contains about a quarter of the
data, so I broke it out as a separate setup, and it transfers fine. I
haven't found much documentation on exclusion rules, so I set up a
separate section in rsyncd.conf with path = /files/logs.
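The relevant parts of rsyncd.conf look more or less like this (the
module names are just what I'm calling them here):

    [logs]
        path = /files/logs
        comment = log data, broken out from /files

    [files]
        path = /files
        comment = the rest of the data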
I then tried to sync the rest of the data, with the rsyncd.conf path set
to /files and the rsync command line including --exclude logs/.
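In other words, something along these lines:

    rsync -av --exclude logs/ backuphost::files /local/mirror/files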
A test on a smaller file set indicated that even though I'm excluding
logs/, the files in it are still being counted in the "total number of
files" in the summary at the end.
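From what I can tell of the pattern rules in the man page, an unanchored
"logs/" matches a logs directory at any depth, while a leading slash
anchors the pattern at the root of the transfer, so perhaps I should be
using:

    rsync -av --exclude=/logs/ backuphost::files /local/mirror/files

but I'm not sure whether that explains the file count I'm seeing.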
I would appreciate any suggestions on exclusion rules or other techniques
to deal with large data sets.