From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] Qemu and heavily increased RSS usage
Date: Fri, 24 Jun 2016 10:57:55 +0100
User-agent: Mutt/1.6.1 (2016-04-27)

* Peter Lieven (address@hidden) wrote:
> Am 24.06.2016 um 11:37 schrieb Stefan Hajnoczi:
> > On Wed, Jun 22, 2016 at 09:56:06PM +0100, Peter Maydell wrote:
> >> On 22 June 2016 at 20:55, Peter Lieven <address@hidden> wrote:
> >>> What makes the coroutine pool memory intensive is the stack size of
> >>> 1MB per coroutine. Is it really necessary to have such a big stack?
> >> That reminds me that I was wondering if we should allocate
> >> our coroutine stacks with MAP_GROWSDOWN (though if we're
> >> not actually using 1MB of stack then it's only going to
> >> be eating virtual memory, not necessarily real memory.)
> > Yes, MAP_GROWSDOWN will not reduce RSS.
> 
> Yes, I can confirm, just tested...
> 
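For reference, a growsdown allocation would look roughly like this (a
sketch with a made-up helper name, not what qemu actually does):

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>

    /* The stack is used from base + size downwards; MAP_GROWSDOWN lets
     * the kernel extend the mapping downwards on faults just below it. */
    static void *alloc_coroutine_stack(size_t size)
    {
        void *base = mmap(NULL, size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_GROWSDOWN,
                          -1, 0);
        return base == MAP_FAILED ? NULL : base;
    }

Anonymous mappings are demand-paged either way, so this only trims virtual
size; the RSS is the pages a coroutine has actually dirtied. Dropping those
for pooled coroutines would need something like madvise(MADV_DONTNEED) on
the stack when it goes back on the pool.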
> >
> > It's possible that we can reduce RSS usage of the coroutine pool but it
> > will require someone to profile the pool usage patterns.
> 
> It would be interesting to see what stack size we really need. Is it
> possible to automatically detect this value (at compile time)?
> 
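I doubt it can be done at compile time - indirect calls and recursion make
the worst-case depth undecidable in general - but a runtime high-water mark
is easy to measure. A sketch, with made-up helper names:

    #include <stdint.h>
    #include <stddef.h>

    #define STACK_MAGIC 0xdeadbeefu

    /* Fill a freshly allocated stack with a known pattern... */
    static void stack_poison(uint32_t *stack, size_t size)
    {
        for (size_t i = 0; i < size / sizeof(*stack); i++) {
            stack[i] = STACK_MAGIC;
        }
    }

    /* ...and scan from the bottom when the coroutine is destroyed: the
     * first overwritten word marks the deepest point reached, since the
     * stack grows down from stack + size. */
    static size_t stack_high_water(const uint32_t *stack, size_t size)
    {
        size_t i;

        for (i = 0; i < size / sizeof(*stack); i++) {
            if (stack[i] != STACK_MAGIC) {
                break;
            }
        }
        return size - i * sizeof(*stack);
    }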
> I can also confirm that the coroutine pool is the second major RSS user beside
> heap fragmentation.

But is it their stack? You said you tried marking GROWSDOWN, so can you check
/proc/../smaps and see how much of the RSS is in the growsdown space?
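Growsdown mappings show up with "gd" in their VmFlags line (kernel 3.8+);
a quick hack like this sums their Rss:

    #include <stdio.h>
    #include <string.h>

    /* Feed an smaps file on stdin: ./a.out < /proc/<pid>/smaps */
    int main(void)
    {
        char line[512];
        long rss = 0, gd_rss = 0;

        while (fgets(line, sizeof(line), stdin)) {
            /* Remember the most recent Rss: value... */
            sscanf(line, "Rss: %ld kB", &rss);
            /* ...and charge it to the total if the mapping turns out
             * to be growsdown. */
            if (strncmp(line, "VmFlags:", 8) == 0 && strstr(line, " gd")) {
                gd_rss += rss;
            }
        }
        printf("Rss in growsdown mappings: %ld kB\n", gd_rss);
        return 0;
    }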

Dave

> Lowering the mmap threshold of malloc to about 32k also gives good results.
> In this case there are very few active mappings in the running vServer, but
> the RSS is still at about 50MB (without coroutine pool). Maybe it would be
> good to identify which parts of Qemu malloc, let's say, >16kB and convert
> them to mmap if it is feasible.
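For experimenting, that's a one-line glibc knob (note that setting it
explicitly also switches off glibc's dynamic threshold adjustment); the
same thing works without rebuilding via MALLOC_MMAP_THRESHOLD_=32768 in
the environment:

    #include <malloc.h>

    int main(void)
    {
        /* Ask glibc to service allocations >= 32k with mmap, so they are
         * returned to the kernel on free instead of fragmenting the heap. */
        mallopt(M_MMAP_THRESHOLD, 32 * 1024);

        /* ... rest of startup ... */
        return 0;
    }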
> 
> Peter
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


