
bug#43389: 28.0.50; Emacs memory leaks


From: Florian Weimer
Subject: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 17:33:13 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)

* Eli Zaretskii:

>> There is an issue with reusing posix_memalign allocations.  On my system
>> (running Emacs 27.1 as supplied by Fedora 32), I only see such
>> allocations as the backing storage for the glib (sic) slab allocator.
>
> (By "backing storage" you mean malloc calls that request large chunks
> so that malloc obtains the memory from mmap?  Or do you mean something
> else?)

Larger chunks that are split up by the glib allocator.  Whether they are
allocated by mmap is unclear.

> Are the problems with posix_memalign also relevant to calls to
> aligned_alloc?  Emacs calls the latter _a_lot_, see lisp_align_malloc.

Ahh.  I don't see many such calls, even during heavy Gnus usage.  But
opening really large groups triggers such calls.

aligned_alloc is equally problematic.  I don't know if the Emacs
allocation pattern triggers the pathological behavior.
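
A throwaway test along the following lines might show whether an
allocation pattern in the style of lisp_align_malloc leaves large
unused chunks behind.  The alignment, block size, and counts below are
made-up illustrative values, not what Emacs actually uses:

  #include <malloc.h>
  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    enum { NBLOCKS = 20000 };
    static void *blocks[NBLOCKS];

    /* Allocate many aligned blocks.  Alignment and size are only
       illustrative, not the exact values lisp_align_malloc uses.  */
    for (int i = 0; i < NBLOCKS; i++)
      blocks[i] = aligned_alloc (1024, 16 * 1024);

    /* Free every other block, leaving the heap fragmented in a way
       the allocator may not be able to consolidate.  */
    for (int i = 0; i < NBLOCKS; i += 2)
      {
        free (blocks[i]);
        blocks[i] = NULL;
      }

    /* Dump allocator statistics; the <total type="rest" .../> line
       shows how much memory glibc retains without returning it to
       the kernel.  */
    malloc_info (0, stdout);
    return 0;
  }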

I seem to suffer from the problem as well.  glibc malloc currently maintains
more than 200 MiB of unused memory:

   <size from="1065345" to="153025249" total="226688532" count="20"/>

   <total type="fast" count="0" size="0"/>
   <total type="rest" count="3802" size="238948201"/>

Total RSS is 1 GiB, but even 1 GiB minus 200 MiB would be excessive.

It's possible to generate such statistics with GDB, by attaching to the
running process and calling the malloc_info function.
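
Roughly like this (the file name is just a placeholder, and the casts
are only needed when glibc debug information is not installed):

  $ gdb -p $(pidof emacs)
  (gdb) call (void *) fopen ("/tmp/emacs-malloc-info.xml", "w")
  (gdb) call (int) malloc_info (0, $1)
  (gdb) call (int) fclose ($1)
  (gdb) detach

Here $1 refers to the FILE * returned by the fopen call; the resulting
XML has the same format as the excerpts above.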

My Emacs process does not look as if it suffers from the aligned_alloc
issue: that issue would leave behind many smaller unused allocations,
not large ones like these.

>> It gets exercised mostly when creating UI elements, as far as I can
>> tell.
>
> I guess your build uses GTK as the toolkit?

I think so:

  GNU Emacs 27.1 (build 1, x86_64-redhat-linux-gnu, GTK+ Version
  3.24.21, cairo version 1.16.0) of 2020-08-20

>> The other issue we have is that thread counts have in recent times
>> grown faster than system memory, and glibc basically scales RSS
>> overhead with thread count, not memory.  The use of libgomp suggests that many
>> threads might indeed be spawned.  If their lifetimes overlap, it would
>> not be unheard of to end up with some RSS overhead in the order of
>> peak-usage-per-thread times 8 times the number of hardware threads
>> supported by the system.  Setting MALLOC_ARENA_MAX to a small value
>> counteracts that, so it's very simple to experiment with it if you have
>> a working reproducer.
>
> "Small value" being something like 2?

Yes, that would be a good start.  But my Emacs process isn't affected by
this, so this setting wouldn't help there.
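
For anyone who wants to experiment, the variable only needs to be set
in the environment Emacs starts with, e.g.:

  $ MALLOC_ARENA_MAX=2 emacs

which caps the number of malloc arenas for that process at two, instead
of the default of 8 times the number of cores on 64-bit systems.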

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill