[distcc] Preprocessing limit
john moser
bluefoxicy at linux.net
Thu Feb 19 20:52:54 GMT 2004
distcc needs a way to limit how many preprocessing jobs it can run at once.
It may be advantageous to run, say, 150,000 make jobs (if you have a 100,000-node
computing network, for example; let's say HP decides it wants to wait
2 minutes to compile a new operating system and all its tools). But running
150,000 preprocessing jobs in parallel will take hours, and only after maybe 80%
of that time will a few jobs trickle out to the compiling nodes.
Instead, distcc could limit how many preprocessing jobs run at once. Each
invocation would sleep until a local preprocessing slot is free, run its
preprocessing, and then ship the result out to a free node. In this way, the
actual efficiency would more closely approach the theoretical efficiency.
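The slot-limiting idea above can be sketched with a counting semaphore; this is
only an illustration, not distcc code — the slot count, job names, and the
preprocess/ship helpers are all hypothetical stand-ins:

```python
import threading
import time

# Hypothetical sketch: a counting semaphore caps how many local
# preprocessing jobs run concurrently. The slot count (2) and the
# preprocess/ship helpers below are illustrative, not distcc's API.
PREPROCESS_SLOTS = threading.Semaphore(2)  # e.g. 2 local CPUs

def preprocess(job):
    time.sleep(0.01)          # stand-in for running "cc -E" locally
    return job + ".i"

def ship_to_node(preprocessed):
    # stand-in for sending the preprocessed file to a free remote node
    return "compiled(" + preprocessed + ")"

def build(job, results):
    with PREPROCESS_SLOTS:    # sleep until a local slot is free
        preprocessed = preprocess(job)
    # slot is released before shipping, so the next job can preprocess
    # while this one travels to a remote node
    results.append(ship_to_node(preprocessed))

results = []
threads = [threading.Thread(target=build, args=("job%d" % i, results))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # all 8 jobs complete, at most 2 preprocessing at a time
```

The point of releasing the slot before shipping is that the box's CPUs stay
busy preprocessing while earlier jobs are in flight to remote nodes.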
Think about it. Need you wait 10 minutes with 50 jobs before shoving them out
onto the network? Are you always going to have enough processing power to get
close to theoretical values? What's the best way to get off the box ASAP?