emacs-devel

Re: Lost or corrupted `undo-tree' history


From: Alexander Shukaev
Subject: Re: Lost or corrupted `undo-tree' history
Date: Fri, 10 Jan 2020 11:27:17 +0100

On 1/10/20 10:31 AM, Eli Zaretskii wrote:
>> Cc: address@hidden
>> From: Alexander Shukaev <address@hidden>
>> Date: Fri, 10 Jan 2020 10:19:15 +0100

>>> IMO, it would be a very bad mantra for a Lisp package operating on
>>> this low level to disable GC, because that could cause the user's
>>> system to run out of memory, and Emacs be killed by the likes of OOM
>>> killer agents.  Disabling GC is barely a good idea on the
>>> user-customization level, certainly not in packages such as undo-tree.

>> Maybe.  I was under the impression that for a "short-running" function
>> it might be fine to prevent GC from running in between some Emacs Lisp
>> instructions.

> You can never know whether a given function is "short-running" or not
> in Emacs, not with the multitude of hooks and advices we have.  You
> _might_ be able to make that assumption safely on the C level, but
> even there, it's non-trivial to prove to ourselves no Lisp could ever
> run in between.  For Lisp code, this is simply impossible to prove.

>> Having said that, I agree that disabling GC may sometimes be a
>> dangerous practice.  I remember how, after reading [1], I tried that
>> suggestion for the minibuffer.  After some time I noticed that I kept
>> running out of memory.  What was interesting was the actual test case.
>> For example, I could run some search command which pushes matches into
>> the minibuffer, say up to 100,000 matches.  As this is happening (and
>> `gc-cons-threshold' is `most-positive-fixnum'), I can see the memory
>> consumption of Emacs growing rapidly, say from 500MB up to 8GB, at
>> which point I cancel the search and exit the minibuffer command.  As a
>> result, `gc-cons-threshold' comes back to the default value and
>> garbage collection starts immediately.  However, the memory
>> consumption of Emacs does not fall back to 500MB, but rather goes down
>> only to, e.g., 6GB, which is never reclaimed afterwards.

> I believe this is the expected behavior of memory allocation routines
> on GNU/Linux.  Freed memory is not necessarily returned to the system,
> but kept in the process's address space as free memory to be used for
> future allocations.

>> If I repeat the test, the memory consumption immediately continues to
>> grow from 6GB onward, as if those 6GB were not reused and were somehow
>> stuck holding something that cannot be reclaimed anymore.  Hence, you
>> can see that if I wait the same amount of time, the memory consumption
>> goes to around 14GB.  This is how one can quickly run out of memory,
>> as if there were memory leaks related to GC.

> There are no memory leaks.  Just don't set gc-cons-threshold too high,
> that's all.

>> Nevertheless, the second suggestion (for speeding up initialization
>> code) from [1] is, IMO, a good example of temporarily raising
>> `gc-cons-threshold', and one which I still use.

> I disagree.  Kids, don't try that at home!  I posted several times on
> reddit and elsewhere the procedure I propose to follow to raise the
> threshold in a conservative and controlled way, to satisfy the
> personal needs of any particular user, without risking any system-wide
> degradation in memory pressure.


Understood.  Also, sifting through your comments on reddit, I just found [1].  Sweet!

[1] http://akrl.sdf.org/#org2a987f7
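For reference, an approach in the spirit of [1] might look like the sketch below (hedged: the exact code in the post may differ; the 16 MB threshold and 15-second idle delay are assumed values): keep a moderately raised threshold and collect explicitly when Emacs is idle, reporting how long each collection takes.

```elisp
;; Sketch: moderate threshold plus explicit GC on idle, so collections
;; happen when the user is unlikely to notice the pause.
(defmacro my/time (&rest body)
  "Return the number of seconds it takes to evaluate BODY."
  `(let ((start (current-time)))
     ,@body
     (float-time (time-since start))))

;; 16 MB is an assumed, moderate value; tune it for your own usage.
(setq gc-cons-threshold (* 16 1024 1024))

;; After 15 seconds of idleness, collect and report the GC pause.
(run-with-idle-timer 15 t
                     (lambda ()
                       (message "GC took %.3fs"
                                (my/time (garbage-collect)))))
```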


