From: Eric Bavier
Subject: [bug#66720] [PATCH] gnu: icecat: honor parallel-job-count.
Date: Thu, 26 Oct 2023 04:23:30 +0000

Hello Clément,

Thank you for your reply.  

On Wed, 2023-10-25 at 21:12 +0200, Clément Lassieur wrote:
> Eric Bavier <bavier@posteo.net> writes:
> 
> > -                     ;; mach will use parallel build if possible by default
> > -                     `(,@(if parallel-build?
> > -                             '()
> > -                             '("-j1"))
> > +                     ;; mach will use a wide parallel build if possible by
> > +                     ;; default, so rein it in if requested.
> 
> It seems like Icecat makes a choice based on available memory.  Why do
> you want to override this with something that would potentially not work
> if memory is lacking?

I think our concerns roughly overlap.

I wasn't aware that it considers available memory; it didn't seem that way
to me.  I typically set `--cores=2` for guix builds on my system, to keep
the rest of it available for other use.  Recently, with no substitute
available, I found my system grinding to a halt while building icecat: it
was using every core on the system and filling all of my RAM (I had not
activated a swap space at that point).

I think this is the code in question, from
./python/mozbuild/mozbuild/build_commands.py:

  if num_jobs == 0:
      if job_size == 0:
          job_size = 2.0 if self.substs.get("CC_TYPE") == "gcc" else 1.0  # GiB

      cpus = multiprocessing.cpu_count()
      if not psutil or not job_size:
          num_jobs = cpus
      else:
          mem_gb = psutil.virtual_memory().total / 1024 ** 3
          from_mem = round(mem_gb / job_size)
          num_jobs = max(1, min(cpus, from_mem))
          print(
              "  Parallelism determined by memory: using %d jobs for %d cores "
              "based on %.1f GiB RAM and estimated job size of %.1f GiB"
              % (num_jobs, cpus, mem_gb, job_size)
          )

So there's no fancy load balancing going on: the job count is derived only
from *total* virtual memory, assuming 2 GiB per build job.  For a dedicated
build machine this is probably fine, as it is in the situation you bring up,
where total system memory is lacking.  But it is not great when *free*
memory is what's lacking, as on an in-use desktop system.
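Numerically, mach's heuristic boils down to something like this sketch (my
own condensation of the mozbuild code quoted above, with made-up names):

```python
def estimate_jobs(cpus, mem_gib, job_size_gib=2.0):
    """Estimate parallel jobs the way mach does: one job per
    job_size_gib of *total* RAM, capped at the CPU count and
    never below 1.  (Hypothetical helper, for illustration.)"""
    from_mem = round(mem_gib / job_size_gib)
    return max(1, min(cpus, from_mem))

# A 16-core box with only 8 GiB RAM gets throttled to 4 jobs...
print(estimate_jobs(cpus=16, mem_gib=8))   # → 4
# ...but a 4-core box with 64 GiB is capped by its core count.
print(estimate_jobs(cpus=4, mem_gib=64))   # → 4
```

Note that nothing here looks at memory currently *in use*, which is exactly
the problem on a busy desktop.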

While it's not a perfect proxy for system load, I feel that, in general,
packages should honor the `--cores=N` build option.  If they take other
things into consideration, we should try to work with that too.  Perhaps we
could arrange for `--cores=N` to set an upper limit, while icecat's `mach`
remains able to cap the job count further based on system virtual memory?
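One possible shape for that combination, as a sketch with hypothetical
names (this is not what the package does today):

```python
def capped_jobs(user_cores, cpus, mem_gib, job_size_gib=2.0):
    """Hypothetical policy: honor --cores=N as a hard upper limit,
    while still letting mach's memory heuristic lower the job count
    further when total RAM is scarce."""
    from_mem = max(1, round(mem_gib / job_size_gib))
    return max(1, min(user_cores, cpus, from_mem))

# --cores=2 wins on a big, memory-rich machine...
print(capped_jobs(user_cores=2, cpus=16, mem_gib=64))  # → 2
# ...while low memory still wins under a generous --cores.
print(capped_jobs(user_cores=8, cpus=8, mem_gib=6))    # → 3
```

That would keep the memory safeguard you describe without letting icecat
ignore the user's explicit core limit.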

`~Eric




