[distcc] Benchmarking the server-side cache. How bad is 4ms latency to distcc server?

Dan Kegel dank at kegel.com
Sun Feb 5 22:44:36 GMT 2006

I'm benchmarking our spiffy new server-side cache patch for distcc
with a largish real-world C++ test app.
The app has lots of source files that are each about 2MB
after preprocessing, produce roughly 2MB .o files,
and take about five seconds each to compile.
The overhead of linking, preprocessing, and sending
bytes over the network is high enough that
even with a 100% hit rate, I'm only seeing a 10%
overall reduction in build time.  So now I'm looking at
the sources of overhead, starting with the network.

The time spent transferring the preprocessed
source is usually 400ms, but occasionally 1 or even 1.5 seconds.
This didn't matter so much when compiles took 5 seconds
anyway, but it's kind of awful when compiles take 0 seconds
(i.e. on a cache hit).

Examining the connection between client and server
using ping, I see that the round-trip time is usually 400
microseconds, but sometimes 4ms; the average is
about 1ms.

I guess I'll experiment with turning on compression next,
and look at a protocol change for large files to try sending
just the hash of the source first, and sending the full
source only if the server replies with 'sorry, not in the cache'.
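To make the idea concrete, here's a rough sketch of that hash-first
exchange (all names and message shapes here are made up for illustration,
not distcc's actual wire protocol or cache patch):

```python
import hashlib

class CacheServer:
    """Toy stand-in for the server-side cache (hypothetical)."""

    def __init__(self):
        self.cache = {}  # hash of preprocessed source -> object bytes

    def query(self, digest):
        # Step 1: client sends only the ~20-byte hash, not the ~2MB source.
        return digest in self.cache

    def compile_and_store(self, digest, source):
        # Step 2 (miss only): client sends the full source; the server
        # compiles it (stubbed out here) and caches the result.
        obj = b"object code for " + source[:16]
        self.cache[digest] = obj
        return obj

    def fetch(self, digest):
        return self.cache[digest]

def client_compile(server, preprocessed_source):
    digest = hashlib.sha1(preprocessed_source).hexdigest()
    if server.query(digest):
        # Cache hit: skip shipping the preprocessed source entirely.
        return server.fetch(digest)
    return server.compile_and_store(digest, preprocessed_source)

server = CacheServer()
src = b"int main(void) { return 0; }"
first = client_compile(server, src)   # miss: full source goes over the wire
second = client_compile(server, src)  # hit: only the hash goes over the wire
assert first == second
```

On a hit this trades the ~400ms source transfer for one short
round trip, which is the whole point.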

But I thought I'd ask: just how awful is it to have a latency
that alternates between 0.4 and 4.0 ms, with an average of 1ms?
My guess is that it causes a significant overall penalty
compared to a uniform 0.4ms latency, but I haven't checked.
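A crude back-of-envelope suggests latency alone is tens of milliseconds,
not the whole 400ms. Assuming (a big assumption; real TCP windows grow
with slow start and tuning) a fixed 64KB effective window on a 2MB
transfer:

```python
# Rough RTT contribution to a 2MB transfer at a fixed 64KB window.
# The window size is an assumption, not a measured value.
file_bytes = 2 * 1024 * 1024
window_bytes = 64 * 1024
round_trips = file_bytes // window_bytes  # 32 windows to drain

for rtt_ms in (0.4, 1.0, 4.0):
    print(f"RTT {rtt_ms} ms -> ~{round_trips * rtt_ms:.0f} ms waiting on ACKs")
```

So the swing between 0.4ms and 4ms RTTs would be roughly 13ms vs 128ms
of ACK-waiting per file under these assumptions; noticeable on a cache
hit, but it doesn't explain the occasional 1.5-second transfers by itself.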
- Dan

Wine for Windows ISVs: http://kegel.com/wine/isv
