[distcc] Re: Suggestions for distcc (fwd)

Martin Pool mbp at samba.org
Wed Sep 4 00:32:00 GMT 2002

On  4 Sep 2002, Oscar Esteban <flesteban at mi.madritel.es> wrote:
> > > The objective would be to give some load to each host. [....]
> > 
> > It already does what you described. :-)  Check the manual.
> Yes, so it says. Shame on me! But looking at the source, I do not come to
> that conclusion. I'm looking at line 208 of where.c. It tries to lock a
> host in order to send it a job. But there is no way to send more than one
> job to a host, as the filename through which the lock is implemented
> depends only on the host's name, regardless of how many times it appears
> in DISTCC_HOSTS.  An 'iter' is used, but it is independent of which
> element of 'hostlist' we're looking at.

> So the only way for a host to get two jobs is incrementing i_try,
> that is, iterating through the whole hostlist.

That's correct.  First every host gets one job (in order), then two,
then three, etc.  Of course it is a bit more complex in practice
because the earlier jobs will finish, but not necessarily in the order
they were started.

> I see some 'theoretical' problems, such as it seeming that hosts that
> appear first in DISTCC_HOSTS have a higher probability of getting work
> than others, but I cannot think of a practical situation where this has
> any impact.

In a way that is a feature: you can list closer or faster hosts first
and they will be slightly preferred.

> What I foresee are limitations in the way things are done. I am comparing
> all the time to a coordinating server, which could have a better chance
> of using information about the client's overall load (not just MY
> delegated jobs), or anything else that would benefit from some processing
> capabilities. I guess this is not a really important matter; it works
> already, and saving time is what distcc must achieve.

Here are some problems I thought of with that approach:

 - Defining "load" is actually pretty hard: you possibly have to take
   into account free CPU cycles, virtual memory, disk bandwidth, etc.
   It's easy for a machine to be 80% CPU idle, but have no free memory
   and so be unable to run a compiler.

 - SMP machines complicate it even more, because adding more jobs may
   not slow them down if there are spare CPUs.

 - Normally we want to e.g. avoid swapping.  However, for some
   extremely large programs (or small machines) swapping may be
   unavoidable and desired by the user.  Also, it can be hard to
   distinguish swapping from normal VM pressure: if lots of swap is
   allocated but not touched, it doesn't mean we need to avoid the
   machine.

 - Compilation jobs are small but intense, and so possibly hard to
   smooth out across machines: the machine will be flat out for a few
   seconds and then idle again.  Many load measurements (such as the
   Unix load average) don't capture this sort of thing well.

 - Because the jobs are quite short, the machine can go from being
   idle to overloaded very quickly, so it's hard for the central
   controller to really "remember" how busy each machine is.  Indeed,
   the situation may change completely between it deciding to use a
   machine, and the work actually starting.

I'm not saying it's impossible, but it seems quite hard.  If you
invent a better load spreading algorithm of course I would be happy to
hear of it.  Having a central controller is a more complex design, so
I don't want to use it unless it will clearly perform better.

So at the moment I'm planning on putting in server-side limits on
concurrent tasks, and perhaps something that takes into account the
machine's load average.  This second one is a crude way of avoiding
interference with interactive use.  Beyond that I think we just have
to rely on the kernel's scheduler treating nice tasks appropriately.

Happy hacking!
