[distcc] Re: Parallel compiles of same source file with distcc

mbp at sourcefrog.net mbp at sourcefrog.net
Mon Jul 26 19:22:02 GMT 2004

On 26 Jul 2004, Terje Elde <terje at elde.org> wrote:
> Hi,
> I've been trying out distcc to ease my pain of slow hardware.  So far it's
> doing a great job.  I do however notice that it often seems to block on the
> weakest link.
> Currently I'm doing a compile job with the following setup:
> Main machine - PIII 450MHz
> Cluster machine 1 - Dual PII 266MHz
> Cluster machine 2 - Single 1000MHz
> I notice that the cluster tends to idle a lot if the dual 266MHz gets a
> compile job late in a directory.  Make and gmake will not hand out new
> jobs until everything is finished up and whatever is in the directory has
> been linked, etc.
> The best fix to this might be to replace the make system with something
> allowing the idle machines to work on compiling sources elsewhere in whichever
> project is being compiled, but that's a rather large task.
> What did occur to me as a potentially simple optimization is for distcc to
> track which files are being compiled where, and when a machine goes idle, it
> can start compiling the same source file, and whichever copy comes back in
> compiled form first gets used.
> I.e.: everything is waiting on the SMP machine, so the task is handed out to
> the 1000MHz machine as well.  When it finishes, the compile on the SMP can be
> killed, and life goes on.
> Currently distcc has the potential to actually be slower, because make can
> end up blocking on a compile on a slow machine.  The above would fix this.

This might be helpful.  However, remember that distcc has no global
view of the whole compile.  

Suppose at time T there is only one job running remotely.  We might
try scheduling it on other machines as well to see if they finish sooner.

But then suppose at T+1 the build tool starts ten new jobs.  Do we
cancel all the duplicated jobs so we have space to run the new jobs?
Which ones do we cancel, the ones that started earlier or the ones
that are on faster machines?  Or do we let them all continue and make
the new jobs wait?

So it's potentially helpful, but more complex than it may at first
appear.  For the moment:

 - don't use machines that are much slower than the client

 - avoid recursive make

