bug-gnu-emacs

bug#43389: 28.0.50; Emacs memory leaks


From: Carlos O'Donell
Subject: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 11:32:23 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.3.1

On 11/17/20 10:45 AM, Eli Zaretskii wrote:
>> From: Florian Weimer <fweimer@redhat.com>
>> Cc: carlos@redhat.com,  dj@redhat.com,  43389@debbugs.gnu.org
>> Date: Mon, 16 Nov 2020 21:42:39 +0100
>> There is an issue with reusing posix_memalign allocations.  On my system
>> (running Emacs 27.1 as supplied by Fedora 32), I only see such
>> allocations as the backing storage for the glib (sic) slab allocator.
> 
> (By "backing storage" you mean malloc calls that request large chunks
> so that malloc obtains the memory from mmap?  Or do you mean something
> else?)

In this case I expect Florian means that glib (sic), which is a slab
allocator, needs to allocate an aligned slab (long lived) and so uses
posix_memalign to create such an allocation. Therefore these long-lived
aligned allocations should not cause significant internal fragmentation.
 
> Are the problems with posix_memalign also relevant to calls to
> aligned_alloc?  Emacs calls the latter _a_lot_, see lisp_align_malloc.

All aligned allocations suffer from an algorithmic defect: subsequent
allocations of the same alignment are unable to reuse previously freed
aligned chunks. Aligned allocations therefore internally fragment the
heap, and this internal fragmentation can spread to the entire heap and
cause heap growth.

The WIP glibc patch is here (June 2019):
https://lists.fedoraproject.org/archives/list/glibc@lists.fedoraproject.org/thread/2PCHP5UWONIOAEUG34YBAQQYD7JL5JJ4/

>> The other issue we have is that in recent times thread counts have
>> grown faster than system memory, and glibc basically scales RSS
>> overhead with thread count, not memory.  A use of libgomp suggests that many
>> threads might indeed be spawned.  If their lifetimes overlap, it would
>> not be unheard of to end up with some RSS overhead in the order of
>> peak-usage-per-thread times 8 times the number of hardware threads
>> supported by the system.  Setting MALLOC_ARENA_MAX to a small value
>> counteracts that, so it's very simple to experiment with it if you have
>> a working reproducer.
> 
> "Small value" being something like 2?

The current code creates 8 arenas per core on a 64-bit system.

You could set it to one arena per core to force more threads to share
arenas and push them to reuse more chunks:

export MALLOC_ARENA_MAX=$(nproc)

And see if that helps.
 
> Emacs doesn't use libgomp, I think that comes from ImageMagick, and
> most people who reported these problems use Emacs that wasn't built
> with ImageMagick.  The only other source of threads in Emacs I know of
> is GTK, but AFAIK it starts a small number of them, like 4.
> 
> In any case, experimenting with MALLOC_ARENA_MAX is easy, so I think
> we should ask the people who experience this to try that.
> 
> Any other suggestions or thoughts?

Yes, we have malloc trace utilities for capturing and simulating traces
from applications:

https://pagure.io/glibc-malloc-trace-utils

If you can capture the application allocations with the tracer then we
should be able to reproduce it locally and observe the problem.

-- 
Cheers,
Carlos.
