rsync speed on slow wireless links
lwarxx at gmail.com
Mon Dec 14 01:18:35 MST 2009
On Mon, Dec 14, 2009 at 02:23:26AM -0500, Matt McCutchen wrote:
> On Thu, 2009-12-10 at 13:08 +0700, Max Arnold wrote:
> > I've noticed that rsync performs significantly worse than wget on slow congested wireless
> > links (GPRS in my case). I don't have large statistics, but in my tests rsync often stalls
> > for 3-5 minutes, while wget stalls only for several seconds and then continues download.
> Rsync isn't doing anything fancy that would cause it to be especially
> affected by packet loss or delay. The protocol takes a few round trips
> to set up, and then it is largely pipelined, so rsync can tolerate some
> amount of latency without slowing down the whole process.
Can the size of this initial metadata be approximately calculated? I plan to experiment with
different timeout values to find a balance between link utilization (by preemptively aborting
long stalls) and traffic consumption (by not retrying too often, to avoid metadata overhead).
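For what it's worth, rsync already has real --timeout and --contimeout options that could drive such an experiment; a rough sketch (the host, module, retry count, and timeout values below are hypothetical placeholders, not recommendations):

```shell
# Sketch: retry rsync with an I/O timeout so long stalls abort early.
# --timeout=N aborts the transfer if no data moves for N seconds;
# --contimeout=N (rsync >= 3.0.0) bounds the initial daemon connection.
# Host, module, and retry parameters here are made up for illustration.
for attempt in 1 2 3; do
    rsync --timeout=30 --contimeout=10 -avz --partial \
        rsync://example.com/module/ /local/dir/ && break
    echo "attempt $attempt stalled; retrying..."
    sleep 5
done
```

Using --partial keeps partially transferred files, so each retry resumes rather than re-sending the whole file, which should keep the retry overhead close to just the setup metadata.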
> I can't explain why rsync would stall much longer than wget. The only
> thought I had is that the network might have a quality-of-service policy
> that favors port 80.
No, it probably hasn't, because I've also tried OpenVPN, which hides the inner traffic
from inspection, and the symptoms are the same.
Does anyone here have experience with the different Linux TCP congestion control algorithms
suitable for cellular networks? What about adjusting send/receive buffers? It seems that the
low-level radio link protocols and equipment do heavy buffering, which reduces packet loss but
introduces unpredictable delays (up to tens of seconds). That is, the link equipment knows when
no radio resources are available and buffers packets (maybe even doing retransmissions if
necessary). This probably fools TCP, which thinks there is congestion and packet loss.
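In case it helps anyone experimenting along these lines, the congestion control algorithm and buffer limits can be changed at runtime via sysctl on Linux; a sketch (the specific algorithm and byte values are illustrative guesses, not tuned recommendations):

```shell
# Sketch, assuming a Linux sender and root privileges.
# List the congestion control algorithms compiled into this kernel:
sysctl net.ipv4.tcp_available_congestion_control

# Try a delay-based algorithm (vegas backs off on rising RTT, which may
# react better to deep radio-link queues than loss-based algorithms):
sysctl -w net.ipv4.tcp_congestion_control=vegas

# Cap socket buffers (min / default / max, in bytes) so TCP cannot keep
# tens of seconds of data queued in the link equipment's buffers:
sysctl -w net.ipv4.tcp_rmem="4096 87380 262144"
sysctl -w net.ipv4.tcp_wmem="4096 16384 131072"
```

A smaller send buffer limits how much data can sit in flight, which should shorten the worst-case stall when the radio link pauses, at the cost of some throughput on a clean link.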
Thanks for replying!