bug-make

Re: Prioritizing non-dependent targets in parallel make


From: Tim Murphy
Subject: Re: Prioritizing non-dependent targets in parallel make
Date: Tue, 5 Jan 2010 09:15:45 +0000

2010/1/4 Eric Melski <address@hidden>:
>
> Hi Tim!
>
> ElectricAccelerator doesn't factor runtimes into scheduling decisions,
> although we have talked about doing so in the past.  I spent some time
> investigating the possibility, most recently a couple years ago.  What I did
> was tweak the Simulator report in ElectricInsight to simulate this kind of
> scheduling, so each time it chooses the next job to "run", it picks the
> longest job of those that are available to start.  I ran the simulation on a
> variety of real-world builds.
>

Hi Eric, Happy New Year :-)

> Unfortunately, the results were disappointing.  In my recollection, the best
> case gave a savings of about 5% of the total build time, and in general it
> was barely measurable -- maybe 1%.  It seems that in practice, most of the
> jobs in a build are about the same length.  The notable exception is the
> final link, which is of course much longer. But then -- of course -- that
> can't start until all the other jobs are done anyway, so the fancy
> scheduling doesn't help.

I see what you mean.  Where every task is much shorter than
(total serial build time)/(number of agents), there is not much point
in this.  It is only useful when you have something odd that takes an
unusually long time.  We now have a few instances of this, where the
task is something unusually large (an absolutely gigantic link in one
case, or parallel makefile generation for a particularly big and
indivisible lump of the OS in another).  When such a task ends up
being run at the very end of the build (which is rare), smarter
scheduling does make a difference.

On the whole, though, that situation is far too rare, and the penalty
is of the order of perhaps 5-10 minutes, which is only a tiny
percentage increase in the build time - so not worth getting too
excited about.
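
Just to illustrate the arithmetic, here is a toy simulation I knocked
together (purely my own sketch, nothing to do with ElectricInsight's
simulator - the agent count and job durations are invented).  It
compares taking jobs in the order they appear against taking the
longest available job first; the reordering only pays off when one
job is longer than (total serial build time)/(number of agents):

import heapq

def makespan(durations, agents, longest_first=False):
    """Simulate running independent jobs on a fixed pool of agents and
    return the wall-clock time at which the last job finishes."""
    queue = sorted(durations, reverse=True) if longest_first else list(durations)
    workers = [0.0] * agents            # time at which each agent becomes free
    heapq.heapify(workers)
    for d in queue:
        free_at = heapq.heappop(workers)   # earliest-free agent takes the job
        heapq.heappush(workers, free_at + d)
    return max(workers)

# 200 similar compile jobs plus one gigantic one (e.g. a huge link that
# is not the final target, or makefile generation for a big component).
jobs = [1.0] * 200 + [60.0]

print(makespan(jobs, 8))                      # big job picked up late -> 85.0
print(makespan(jobs, 8, longest_first=True))  # big job started first  -> 60.0

# With the big job removed, max(jobs) is well below sum(jobs)/agents and
# the two orderings finish at essentially the same time.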

> I'll see if I can dig up the patch for ElectricInsight and maybe
> write it up in a blog.

That would be very cool, thanks.

We are actually getting to the point where we need faster compilers
to see any further improvement, basically. :-)  In fact, another
problem with parallel/cluster builds is having some way to identify
what the scaling bottlenecks really are.  E.g. is the compiler slow,
or is it just taking a long time to load from the shared disc?  Is
the network congested, or is the destination output drive too slow?
Anyhow, there are ways to gather all of this data, I realise, but it
seems fairly challenging to put it all together into a correct
picture.  I am even finding it hard to work out whether memory is a
bottleneck, can you believe it - virtual memory makes everything hard
to understand. :-)  Moan, moan, moan, whinge, gripe ;-)
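
On the compiler-versus-disc question, one cheap trick I have been
toying with (just a sketch - the wrapper name and the makefile line
are invented) is to wrap each compiler invocation and compare the
wall-clock time with the CPU time the child process actually used.
If wall time is much larger than user+system time, the job spent most
of its life waiting on the shared disc, the network or swap rather
than compiling:

#!/usr/bin/env python
# timed-run.py (hypothetical): run one command and report how busy it was.
import resource
import subprocess
import sys
import time

def timed_run(argv):
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    start = time.time()
    rc = subprocess.call(argv)
    wall = time.time() - start
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    # user+system CPU seconds consumed by this command (taken as a delta
    # in case the wrapper ever runs more than one child).
    cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
    busy = 100.0 * cpu / wall if wall > 0 else 0.0
    sys.stderr.write("%s: wall=%.2fs cpu=%.2fs busy=%.0f%%\n"
                     % (argv[0], wall, cpu, busy))
    return rc

if __name__ == "__main__":
    # e.g. in the makefile:  CC = timed-run.py gcc
    sys.exit(timed_run(sys.argv[1:]))

Aggregating those lines per host would at least separate "the
compiler is slow" from "everything is waiting for the file server".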


Cheers,

Tim



Regards,

Tim
-- 
You could help some brave and decent people to have access to
uncensored news by making a donation at:

http://www.thezimbabwean.co.uk/



