Re: controlling memory use beyond --noswap


From: Rob Sargent
Subject: Re: controlling memory use beyond --noswap
Date: Wed, 30 Apr 2014 15:41:12 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0

Seems to me that with 64 GB of RAM and an app that may take 60 GB, you can only safely run one process, unless you're comfortable swapping. You would need a swap file of at least another 64 GB (see the swapfile command), and then you could maybe use --jobs 2. Three if you're feeling lucky.
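On Linux, the swap-file setup described above would look roughly like this (a sketch only; the path and size are illustrative, and all four commands need root):

```shell
# Create and enable a 64 GB swap file (run as root).
fallocate -l 64G /swapfile   # reserve the space (dd if=/dev/zero also works)
chmod 600 /swapfile          # swap files must not be readable by other users
mkswap /swapfile             # write the swap signature
swapon /swapfile             # enable it; check with `swapon --show` or `free -h`
```

To make it permanent, an entry such as `/swapfile none swap sw 0 0` would also go in /etc/fstab.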

rjs


On 04/30/2014 03:35 PM, B. Franz Lang wrote:
Hi there

I have been trying to find a way to use 'parallel' without completely freezing machines, which in my case happens because of the parallel execution of very memory-hungry applications (for example, a server with 64 GB of memory, where one instance of an application unpredictably needs anywhere between 10 and 60 GB). If a couple of them are started in parallel, --noswap is unable to master the situation: swap usage sometimes grows beyond the allocated space and jobs get dropped by the system (in addition to the server freezing almost solid, and the whole run taking more time than it would without parallel).

I am currently using a rather awkward workaround: estimating memory usage beforehand with commands like /usr/bin/time -f "%M %P", and using that to choose the number of parallel processes. Not ideal. Is there an easy way around this, or any intention to add features that would help under such conditions? I could imagine running a first instance of a process to sense its memory usage before sending off the following ones.
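That probe-first workaround could be automated along these lines. This is only a sketch: `myapp` and its inputs are hypothetical placeholders, GNU time's %M reports peak resident set size in kilobytes, and /proc/meminfo's MemTotal is also in kB, so the two divide directly.

```shell
# Derive a safe parallel job count from one probe run's peak memory use.
# Arguments: PEAK_KB (peak RSS of one instance) TOTAL_KB (physical memory).
jobs_for() {
    jobs=$(( $2 / $1 ))
    [ "$jobs" -lt 1 ] && jobs=1   # always allow at least one job
    echo "$jobs"
}

# Intended use ("myapp" and "sample-input" are placeholders):
#   peak_kb=$( /usr/bin/time -f "%M" myapp sample-input 2>&1 >/dev/null | tail -n 1 )
#   total_kb=$( awk '/^MemTotal:/ { print $2 }' /proc/meminfo )
#   parallel --jobs "$(jobs_for "$peak_kb" "$total_kb")" myapp ::: input-*
```

For example, a 10 GB peak on a 64 GB machine yields 6 jobs. Later versions of GNU parallel also gained a --memfree option, which only starts a new job when at least the given amount of memory is free; that addresses this case more directly than a precomputed job count.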

Cheers Franz
