Re: Memory again
From: Dmitry Antipov
Subject: Re: Memory again
Date: Sun, 18 Dec 2011 19:13:25 +0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:8.0) Gecko/20111115 Thunderbird/8.0
On 12/17/2011 01:55 AM, Stefan Monnier wrote:
> - Many memory problems are misdiagnosed as fragmentation problems, where
>   the real cause is either inefficient memory use, or memory leaks.
> - Real fragmentation problems exist but are fairly rare.
It would be nice to have a built-in (optionally selected at configure time)
way to take a 'snapshot' of the heap to see how it is (mis)used. It would also
be interesting to know whether a set of gdb macros could do something similar.
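For instance (just a sketch, assuming glibc, where malloc_info() has been
available since 2.10; the helper name is made up), something like this could
sit behind a configure option:

#include <stdio.h>
#include <malloc.h>

/* Dump an XML description of all malloc arenas (total, free and
   mmap'ed bytes, free chunks per bin, etc.) to PATH.  */
void
dump_heap_snapshot (const char *path)
{
  FILE *out = fopen (path, "w");
  if (out)
    {
      malloc_info (0, out);
      fclose (out);
    }
}

From gdb one could presumably also just do 'call malloc_info (0, stderr)'
on a live Emacs, assuming the symbols are visible.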
- In some "fragmentation" cases, the Emacs process holds on to memory
without any good reason, i.e. it's been "free"d but the malloc library
does not want or can't return it to the OS because it did not use mmap
to allocate it. This can be fixed, but this memory would really be
unused; it can still appear in RSS but only because the memory
pressure hasn't yet bumped it out of RAM. IOW these cases may show
high memory use in terms of VSZ or RSS but fixing them is low priority
because their only direct cost is use of of swap space.
IIUC, this is not true, at least on Linux (see how zap_pte_range() updates
the MM_ANONPAGES RSS counter; this happens when munmap() is called). Unmapped
(but still resident in RAM) pages aren't accounted as RSS of any process;
they're accounted separately, and the amount of space occupied by such pages
shows up as 'Active(anon)' in /proc/meminfo.
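A trivial way to see this outside of Emacs (Linux-only toy which just reads
VmRSS from /proc/self/status):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static void
print_rss (const char *label)
{
  char line[128];
  FILE *f = fopen ("/proc/self/status", "r");
  while (fgets (line, sizeof line, f))
    if (!strncmp (line, "VmRSS:", 6))
      printf ("%s:%s", label, line + 6);
  fclose (f);
}

int
main (void)
{
  size_t len = 64 * 1024 * 1024;
  char *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    return 1;
  memset (p, 1, len);            /* fault the pages in */
  print_rss ("after touching");  /* RSS grew by ~64M */
  munmap (p, len);               /* zap_pte_range() drops MM_ANONPAGES */
  print_rss ("after munmap");    /* RSS is back down immediately */
  return 0;
}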
> - Fixing the remaining real fragmentation problems probably requires
>   a different allocator that can move objects to compact the memory
>   space.  Maybe such an allocator can be retrofitted into Emacs
>   (e.g. a mostly-copying allocator), but no one has tried it yet
>   (probably because the outcome is hard to predict, the problem it
>   attempts to fix only occurs rather rarely, and it can be difficult to
>   ensure that it doesn't negatively affect the more common cases).
It's not so hard to predict the memory-saving benefits of a copying or
compacting collector: ideally, such a collector would free everything that
currently sits on the free lists, so the output of garbage-collect can be
used as a rough estimate of how much data could be compacted. In my
measurements, the typical amount of space held on the free lists is ~3-5%
of the total heap size; it's reasonable to expect that a copying/compacting
collector would also reduce overall heap fragmentation, which might give a
few more percent. Anyway, ~5% isn't convincing enough to start the hard work
on a new GC; instead, I believe that minor optimizations of the current
algorithm and of the Lisp data representation (vector allocation, compact or
'immediate' strings, and something like cdr-coding or unrolled lists) could
have a comparable, or even better, effect on memory consumption.
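To illustrate the last point: an unrolled list could pack several cars into
one block with a single shared cdr, roughly like the following (a purely
hypothetical layout, not what Emacs does today and not a proposal for lisp.h):

/* Hypothetical unrolled cons block: up to UNROLL consecutive list
   elements share one block and one cdr, so the cdr word is paid once
   per UNROLL elements instead of once per element.  Lisp_Object is
   the usual tagged word from lisp.h.  */
#define UNROLL 4

struct unrolled_cons
{
  Lisp_Object cars[UNROLL];   /* the packed elements */
  unsigned char used;         /* how many slots of cars[] are live */
  Lisp_Object cdr;            /* next block, or the ordinary tail */
};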
Dmitry