Re: Parallel make with distributed systems
From: SF Markus Elfring
Subject: Re: Parallel make with distributed systems
Date: Sun, 03 May 2015 19:42:55 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.6.0
> The question is, why is it the case that these challenges, tasks, work,
> and constraints need to be handled within make itself,
> rather than farmed out to a separate process via the SHELL capability?
Does it matter to get the accounting of processor cores right across the involved systems?
> Offhand I can't see any reason why make has to internalize this effort:
> it seems cleaner, and simpler, to me to have different job management
> environments provide their own command line tool that can submit jobs to
> be run, which could be used by any build system, and integrate that with
> make via the SHELL variable rather than through a compiled-in API.
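The SHELL-based integration described above can be sketched as follows. This is a minimal, local stand-in: "jobshell" is a hypothetical wrapper script that intercepts each recipe command, where a real setup would forward the command to a scheduler CLI such as qsub or srun instead of running it locally.

```shell
# Create a wrapper that make will invoke as: ./jobshell -c 'recipe command'
# A real wrapper would submit "$2" to a job manager; here we just log and
# run it locally to show the interception point.
cat > jobshell <<'EOF'
#!/bin/sh
echo "submitted: $2" >&2
exec /bin/sh "$@"
EOF
chmod +x jobshell

# A makefile that routes its recipes through the wrapper via SHELL.
printf 'SHELL := ./jobshell\nall:\n\techo built\n' > Makefile.demo

make -f Makefile.demo
```

Because SHELL is no longer the default /bin/sh, GNU make passes every recipe line through the wrapper, so any build system that honors SHELL gets remote execution without a compiled-in API.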
Can a build tool make more efficient decisions from its knowledge
about software dependencies?
How much dependency-management logic would need to be duplicated by another tool
in this application context?
> It's quite possible I'm just not seeing the benefits; if you have a
> specific use-case in mind and could describe them, that would be very
> helpful to the discussion.
I would like to get easier access to additional data-processing resources.
Regards,
Markus