Re: -j/-l : minimum of jobs running under max-load with auto = cpu+1 default
Mon, 21 May 2018 18:33:33 +0200
Gnus (5.13), GNU Emacs 25.1.1 (x86_64-pc-linux-gnu)
On 21/05/2018 at 08:23, Paul Smith wrote:
> On Mon, 2018-05-21 at 08:36 +0200, Garreau, Alexandre wrote:
>> Then I discovered --load-average, and I’m asking myself whether the
>> optimum is -j n+1, -l 1.0, or -l n or n+1?
> IMO, there are too many problems with choosing any value as the default:
> * It's not so simple to determine the number of CPUs, portably.
To me that looks more like a reason to have a single, standardized
detection mechanism inside make, rather than the many separate,
not-fully-portable, often dysfunctional ones scattered across shell
scripts, makefiles, and automake files. Also, useful features such as
jobs, pipes, and signals aren’t fully portable in make either, afaik; so
would making this work, optionally, at least on a handful of the most
popular platforms really be unacceptable?
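As an illustration of the duplication argued against above, here is a
minimal sketch of the platform dispatch each build script currently has
to reimplement. The function name is mine, and the commands probed
(`nproc`, `sysctl hw.ncpu`, `getconf _NPROCESSORS_ONLN`) only cover the
common cases, not every platform:

```shell
#!/bin/sh
# Sketch: detect the number of online CPUs, trying the tools that
# exist on common platforms and falling back to 1 if none work.
ncpus() {
    if command -v nproc >/dev/null 2>&1; then
        nproc                          # GNU coreutils (Linux)
    elif sysctl -n hw.ncpu >/dev/null 2>&1; then
        sysctl -n hw.ncpu              # BSD / macOS
    elif getconf _NPROCESSORS_ONLN >/dev/null 2>&1; then
        getconf _NPROCESSORS_ONLN      # widely available, not strictly POSIX
    else
        echo 1                         # give up: assume a single CPU
    fi
}
ncpus
```

Every project carrying a variant of this is exactly the sparseness the
paragraph above complains about.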
> * Many makefiles invoke commands that themselves are multi-threaded
> and for these commands you definitely don't want to choose a -j
> value with "# of CPUs" parallelism.
That’s what manual specification of the job count is for; I wasn’t
proposing to remove it, only to make a good default more accessible.
> * Many users of makefiles want to do other things with their systems
> in addition to builds, and don't want to choose a -j or -l value
> that makes their system slow or unusable. They'd rather their
> builds take longer.
Then maybe some value could be both acceptable to those users and still
useful for making parallelism a more standard, better-tested feature?
Something like “(min 2 (/ cores 4))”. That could be “auto”, and cores+1
could be “max” or “fullspeed” or something along those lines.
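Taken literally, that expression is min(2, cores/4). A sketch of what
such an “auto” default might compute; the function name and the clamp
to at least one job are my assumptions, not part of the proposal:

```shell
#!/bin/sh
# Sketch of the proposed "auto" job count, read literally as
# (min 2 (/ cores 4)); the clamp to >= 1 is an assumption.
auto_jobs() {
    cores=$1
    quarter=$((cores / 4))
    if [ "$quarter" -lt 2 ]; then jobs=$quarter; else jobs=2; fi
    [ "$jobs" -lt 1 ] && jobs=1   # never schedule zero jobs
    echo "$jobs"
}
auto_jobs 16   # -> 2
auto_jobs 4    # -> 1
auto_jobs 2    # -> 1 (quarter would be 0; clamped)
```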
> * There are other metrics besides # of CPUs that need to be taken into
> consideration; for example memory. I have a build environment which
> builds 200 different C++ programs, each of which ends up to be 200M
> or so, and the linker takes huge amounts of memory. If I use -j8 on
> my 8-core system and the rebuild only needs to re-link, my entire
> system will hang for many minutes swapping RAM (I can't even move
> the mouse), even though I have enough CPUs. If I choose -j5 or -j6,
> it works much better.
That still seems like a reason to keep the number greater than one but
below some threshold… make could also check resources such as memory
internally, sparing users the burden of manually probing available
memory, average per-process memory consumption, and so on. Ideally users
could add memory-management limits to their kernel/operating system to
avoid such trouble (afair, though I never got all the way through it,
under Linux this involves ulimits, the same kind of settings that keep
fork bombs from killing your system; maybe kOpenBSD or the Hurd have
something more elaborate).
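For the record, the Linux-side knob alluded to is per-process resource
limits, reachable from the shell’s ulimit builtin. A hedged sketch; the
4 GiB figure is an arbitrary example, not a recommendation:

```shell
#!/bin/sh
# Cap the virtual address space of everything run inside this
# subshell, so a memory-hungry link step fails fast instead of
# swapping the whole machine. The limit is given in KiB.
(
    ulimit -v 4194304    # ~4 GiB of virtual memory (example value)
    ulimit -v            # print the limit now in effect
    # make -j8 ...       # builds launched here inherit the cap
)
```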
And in any case you can still specify the number of jobs yourself when
you do something like that (and I assume the option discussed since the
beginning of this thread doesn’t imply enabling parallelism by default).
Also, I was suggesting not only adding a default (or several, each
triggered by a keyword), but also, when -l is specified, making -j
specify the minimum rather than the maximum number of jobs running.
Then -j cores+1 would always keep at least all my cores busy, and would
start additional jobs if that’s not enough because of some other
bottleneck (blocking I/O). Or is this actually *strictly* less useful
than the current behavior?
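For contrast, here is today’s GNU make behavior on a toy makefile: -jN
is a hard maximum on concurrent jobs, and -lX additionally holds back
new job starts while the load average exceeds X. The makefile and the
numbers are illustrative only (.RECIPEPREFIX needs GNU make 3.82+):

```shell
#!/bin/sh
# Toy demonstration of the *current* -j/-l semantics this thread
# proposes to change: -j caps concurrency, -l throttles on load.
cat > /tmp/demo.mk <<'EOF'
.RECIPEPREFIX = >
targets := a b c d
all: $(targets)
$(targets):
>@echo building $@
EOF
cores=$(nproc 2>/dev/null || echo 2)
make -f /tmp/demo.mk -j"$((cores + 1))" -l "$cores"
```

Under the proposal, the same -j cores+1 would instead be a floor kept
busy whenever the -l threshold permits.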