[distcc] SMP volunteers + compression

Martin Pool mbp at samba.org
Thu Jun 13 10:54:31 GMT 2002

On 11 Jun 2002, Marko Mikulicic <marko at seul.org> wrote:

> >Yes, using SMP machines more efficiently is important.  I would rather
> >avoid making each client know how many remote CPUs there are, on the
> >principle of (store that information) "once and only once".
> Do you mean that distccd should provide this information?

Yes, I think so.  At the moment, I prefer the idea of making the
client just back off if a machine is too busy to accept a connection.
Alternatively, the server might send the client some information about
how many more jobs it can take.
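The back-off idea above can be sketched roughly like this — a minimal, hypothetical illustration in Python, not distcc's actual C implementation: the client retries a refused connection with exponential delays, and gives up (falling back to a local compile) after a few attempts.

```python
import time

def connect_with_backoff(try_connect, max_attempts=4, base_delay=0.01):
    """Attempt a connection; back off exponentially if the server is busy.

    try_connect is any callable that returns a connection object, or
    raises ConnectionRefusedError when the server won't take more jobs.
    Returns None if every attempt fails (caller compiles locally).
    """
    for attempt in range(max_attempts):
        try:
            return try_connect()
        except ConnectionRefusedError:
            # wait 1x, 2x, 4x, ... the base delay before retrying
            time.sleep(base_delay * (2 ** attempt))
    return None
```

The names here (connect_with_backoff, max_attempts) are made up for the sketch; the point is only that the client needs no knowledge of the server's CPU count — refusal itself carries the load information.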

>  Distccd could be started with a cmdline switch which tells it
> how many concurrent compilations should be started. I don't
> think calling it "processors" is correct, although SMP
> systems benefit the most from it.

Yes, I already have such a --tasks switch for distccd.
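Such a limit amounts to a counting semaphore over job slots. A small sketch, assuming a hypothetical TaskLimiter wrapper (distccd itself is written in C; this only illustrates the mechanism behind a --tasks N switch):

```python
import threading

class TaskLimiter:
    """Cap the number of concurrent compile jobs, as --tasks N would."""

    def __init__(self, tasks):
        # one slot per allowed concurrent compilation
        self._sem = threading.BoundedSemaphore(tasks)

    def try_acquire(self):
        # non-blocking: refuse the job outright if all slots are busy,
        # so the client can back off or try another volunteer
        return self._sem.acquire(blocking=False)

    def release(self):
        # free a slot when a compilation finishes
        self._sem.release()
```

Refusing rather than queueing is what makes the client-side back-off above work: a saturated server answers immediately instead of holding the job.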

> I was thinking of something like liblzo, which would scale better
> even on faster nets. I noticed mean traffic of 1.5 Mb/s with 4 clients
> compiling small C++ sources (with few standard headers) (116 files,
> 10k lines unpreprocessed, 600k lines preprocessed). This is already
> the maximum for 10 Mbps lines.

I have looked at liblzo in more detail, and it looks very promising.
It seems to squash .i files to about 33% of their size, rather than
about 22% for .gz, but it's much quicker.
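Measuring such ratios is straightforward. A sketch using only Python's zlib (the same DEFLATE algorithm behind .gz) — liblzo has no stdlib binding, so it is omitted here; the sample input is invented, standing in for preprocessed .i output, which is similarly repetitive:

```python
import zlib

def ratio(data: bytes, level: int = 6) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data, level)) / len(data)

# Preprocessed source repeats declarations from headers many times over,
# which is why both gzip and LZO do well on it.
sample = b"extern int printf(const char *restrict fmt, ...);\n" * 500
r = ratio(sample)
```

The exact percentages depend on the input, of course; the 33% vs. 22% figures above are specific to the .i files measured.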

Thanks for pointing it out!
