bug-bash

Re: Parallelism a la make -j <n> / GNU parallel


From: Ole Tange
Subject: Re: Parallelism a la make -j <n> / GNU parallel
Date: Fri, 11 May 2012 23:57:33 +0200

On Thu, 3 May 2012 19:49:37, Colin McEwan wrote:

> I frequently find myself these days writing shell scripts, to run on
> multi-core machines, which could easily exploit lots of parallelism (e.g.
> a batch of a hundred independent simulations).
>
> The basic parallelism construct of '&' for async execution is highly
> expressive, but it's not useful for this sort of use-case: starting up 100
> jobs at once will leave them competing, and lead to excessive context
> switching and paging.
>
> So for practical purposes, I find myself reaching for 'make -j<n>' or GNU
> parallel, both of which destroy the expressiveness of the shell script as I
> have to redirect commands and parameters to Makefiles or stdout, and
> wrestle with appropriate levels of quoting.
>
> What I would really *like* would be an extension to the shell which
> implements the same sort of parallelism-limiting / 'process pooling' found
> in make or 'parallel' via an operator in the shell language, similar to
> '&', with the semantics of *possibly* continuing asynchronously (like '&')
> if system resources allow, or of waiting for the process to complete
> (like ';').
>
> Any thoughts, anyone?
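
(For concreteness: the construct described above can be approximated in
plain bash today. A minimal sketch, assuming bash >= 4.3 for 'wait -n';
MAXJOBS and ./run-simulation are placeholders, not anything from the
original mail:

    MAXJOBS=$(nproc)
    for cfg in sim-*.cfg; do
        # If MAXJOBS jobs are already running, block until one exits.
        while (( $(jobs -rp | wc -l) >= MAXJOBS )); do
            wait -n
        done
        ./run-simulation "$cfg" &
    done
    wait    # drain the remaining jobs

The while/'wait -n' pair gives the "continue asynchronously if resources
allow, otherwise block" behaviour, just spelled out as a loop rather than
provided as a single operator.)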

Can you explain how that idea would differ from sem (part of GNU Parallel)?

Example from the man page:

       Run one gzip process per CPU core. Block until a CPU core
       becomes available.

        for i in *.log ; do
          echo "$i"
          sem -j+0 gzip "$i" ";" echo done
        done
        sem --wait
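
(Here -j+0 means "the number of CPU cores plus 0", i.e. run exactly one
gzip per core, and sem --wait blocks until every job started through sem
has finished.)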

For quoting, the --shellquote option in GNU Parallel may be of help.
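
For example, a small sketch: with --shellquote, GNU Parallel does not run
the command but prints a shell-quoted version of it, which can then be
pasted into a Makefile or another layer of shell (the gzip command here
is only an illustration):

    parallel --shellquote ::: 'gzip "my file.log"'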

/Ole
-- 
Did you get your GNU Parallel merchandise?
https://www.gnu.org/software/parallel/merchandise.html


