Problem in using rsync

Tim Conway conway at
Thu Jun 17 19:17:28 GMT 2004

Classic.  I used to see that.  In mine, I finally had to give up, and 
wrote another tool... not rsync's fault.
I would get timeouts during file list builds.  As I recall, there's an 
internally-defined "SELECT_TIMEOUT", that, at least back then, remained at 
60 seconds, regardless of the commandline timeout.  Now that you've 
boosted the speed, your big runs are finishing their filelist builds and 
taking off on hard I/O usage, slowing the filelist build enough to exceed 
the SELECT_TIMEOUT.   Once the list is built, it's more robust.  Try 
running the other rsyncs niced.  They'll still burn like crazy, but the 
increased CPU demand of the big list run during its list build may hold 
them down enough to let it finish building.
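A minimal sketch of that scheduling idea (the filesystem names and paths are made up, and the `echo` keeps it a dry run — drop the `echo` to launch the real transfers):

```shell
#!/bin/sh
# Dry-run sketch: print the commands instead of running them.
launch() {  # launch NICELEVEL SRC DST
    echo nice -n "$1" rsync -a --delete --timeout=600 "$2" "$3"
}

# Start the big-filelist run first, at normal priority (nice 0)...
launch 0 /export/bigfs/ /backup/bigfs/

sleep 1   # head start; try 10s or more on the real runs

# ...then the smaller runs at the lowest priority, so they yield
# the CPU while the big run is still building its file list.
for fs in fs1 fs2 fs3; do
    launch 19 "/export/$fs/" "/backup/$fs/"
done
```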
Also, start the biglist run first, so its CPU will be busy.  If it's the 
one sharing, it'll be slowed.  Ten seconds should suffice, if I'm right.  If 
that doesn't do it, give it a bigger head-start, long enough for it to get 
into the transfer phase.  The easiest way to determine that time is to run 
it once to completion, then run it again on unchanged data, and note the 
time for that operation.  That's how much head-start it needs to be in 
transfer before the big dogs choke it.
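That measurement could be scripted along these lines (the rsync line is hypothetical and commented out so the sketch runs anywhere; uncomment it and point it at the real filesystems):

```shell
#!/bin/sh
# Time a second rsync pass over already-synced data; that wall time is
# roughly the head start the big run needs before the others launch.
start=$(date +%s)
# rsync -a --delete --timeout=600 /export/bigfs/ /backup/bigfs/
end=$(date +%s)
headstart=$((end - start))
echo "give the big run about ${headstart}s head start"
```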

Wayne'll probably correct my errors.  That SELECT_TIMEOUT thing has 
probably changed by now.  It's been a couple of years since I read and 
mentally traced the whole tree.  But, nicing is still a good bet, as is 
the head-start.  Your timeout is already pretty substantial.

Tim Conway
Unix System Administration
Contractor - IBM Global Services
conway at

Anh Truong <transfert.curateur at> 
Sent by: at
06/17/2004 08:11 AM

rsync at (Receipt Notification Requested)

Problem in using rsync


I use rsync to perform backups on disk on a SunFire 880 with Solaris 8. 
For performance reasons, we launch 5 rsyncs simultaneously on 5 different 
filesystems, and about 150-200 "cp -p" commands on as many database files. 
We have been using the same scripts for about 2 months, without problems. 
The backup is performed on the same server (from filesystem to filesystem 
on the same server).

Last weekend, we replaced the 4x750MHz CPUs with 8x1200MHz CPUs and 
upgraded from 8 to 16 GB of RAM. Since then, we have had 2 errors out of 
3 backup runs. The error is always on the same filesystem, which is not 
the largest one but the one that has the most files and directories 
(400,000 files as opposed to 600 for the others). The error message we 
get is:

io timeout after 600 seconds - exiting
rsync error: timeout in data send/receive (code 30) at io.c(143)
rsync: writefd_unbuffered failed to write 69 bytes: phase "unknown": Broken pipe
rsync error: error in rsync protocol data stream (code 12) at io.c(836)
The command used is:

OPTS="--delete --timeout=600 --exclude dbf/ --rsh=rsh"
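For reference, those options would be applied in a full invocation something like this (the source and destination paths are hypothetical — the post doesn't show them — and the `echo` keeps it a dry run):

```shell
#!/bin/sh
OPTS="--delete --timeout=600 --exclude dbf/ --rsh=rsh"
# $OPTS is left unquoted on purpose, so the shell splits it into
# separate arguments; "dbf/" is the exclude pattern from the post.
echo rsync -a $OPTS /export/oracle/ /backup/oracle/
```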


In the documentation, it is said that this kind of error might be related 
to one of the following:

- Disk full: this is not the case.
- Remote rsync is not found: it is on the same server, so it is found.
- The remote-shell setup isn't working right or isn't "clean": I tried the 
suggested testing and there is no problem. Moreover, it worked before...

I also saw that the rsync process might have been starved for CPU or 
memory; in our case, I do not think that is the problem.
Can you help me with this?

Thanks in advance
