emacs-devel

Re: Larger GC thresholds for non-interactive Emacs


From: Stefan Monnier
Subject: Re: Larger GC thresholds for non-interactive Emacs
Date: Fri, 17 Jun 2022 22:32:30 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/29.0.50 (gnu/linux)

> Yup.  But do you mean in general?  I.e., -batch would set that variable
> to 2.0?  Would there be any likely major repercussions -- i.e., jobs
> that used to run fine would run out of memory?

I've looked a bit further at some of the repercussions.  The most
obvious immediate one is that a program that needs 10MB of live data
will now use up 30MB of heap (10MB of live data at the last GC plus up
to 20MB of data allocated since the last GC), so it will increase
process sizes in a non-trivial way.

I'd lean towards 1.0 instead of 2.0 for that kind of reason.
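To make the arithmetic concrete, here's a sketch (in Python, purely
illustrative, not how Emacs computes it internally) of the peak heap
size implied by a given `gc-cons-percentage`:

```python
# Illustrative model: after a GC leaves `live_mb` of live data, the
# next GC is triggered once roughly `percentage * live_mb` of new data
# has been allocated, so the heap peaks around live_mb * (1 + percentage).

def peak_heap_mb(live_mb, percentage):
    """Approximate peak heap: live data plus allocations since last GC."""
    return live_mb * (1 + percentage)

print(peak_heap_mb(10, 2.0))  # p=2.0: 10MB live -> 30.0 MB heap
print(peak_heap_mb(10, 1.0))  # p=1.0: 10MB live -> 20.0 MB heap
print(peak_heap_mb(10, 0.1))  # p=0.1 (old default): 11.0 MB heap
```

So p=2.0 triples the footprint of a 10MB workload, while p=1.0 only
doubles it.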

I also took a look at related data, e.g. comparing p=1.0 to the old
p=0.1 on the process that performs the highest number of GCs during an
Emacs build (188 cycles with p=1.0 versus 650 cycles with p=0.1).
Here are the corresponding last few GC cycles:

    % grep GC-26227 ./+make-0.1.log | tail -n 10              
    GC-26227 p=0.1 total=18.8M free=1.9M thresold=1.9M
    GC-26227 p=0.1 total=18.6M free=2.0M thresold=1.9M
    GC-26227 p=0.1 total=18.7M free=1.9M thresold=1.9M
    GC-26227 p=0.1 total=18.7M free=1.9M thresold=1.9M
    GC-26227 p=0.1 total=18.8M free=1.9M thresold=1.9M
    GC-26227 p=0.1 total=18.8M free=1.9M thresold=1.9M
    GC-26227 p=0.1 total=18.8M free=1.9M thresold=1.9M
    GC-26227 p=0.1 total=18.8M free=1.8M thresold=1.9M
    GC-26227 p=0.1 total=145.7M free=32.6M thresold=14.6M
    GC-26227 p=0.1 total=132.8M free=60.7M thresold=13.3M
    % grep GC-898 ./+make-1.0.log | tail -n 10              
    GC-898 p=1.0 total=18.5M free=7.5M thresold=18.5M
    GC-898 p=1.0 total=18.6M free=7.4M thresold=18.6M
    GC-898 p=1.0 total=18.6M free=7.4M thresold=18.6M
    GC-898 p=1.0 total=18.7M free=7.4M thresold=18.7M
    GC-898 p=1.0 total=18.7M free=7.3M thresold=18.7M
    GC-898 p=1.0 total=18.8M free=7.3M thresold=18.8M
    GC-898 p=1.0 total=18.8M free=7.3M thresold=18.8M
    GC-898 p=1.0 total=18.8M free=7.3M thresold=18.8M
    GC-898 p=1.0 total=18.8M free=7.3M thresold=18.8M
    GC-898 p=1.0 total=145.7M free=32.6M thresold=145.7M
    % 

Obviously this process ends up with a very large one-step allocation,
which is arguably interesting in itself (I suspect there's some
GC inhibition going on there), but the more interesting point
is that with p=0.1 the amount of free space left after GC
(i.e. blocks we can't release, because of fragmentation) is about as
large as the next threshold, i.e. about 10%, whereas with p=1.0 this
amount of unreleasable free space is significantly higher.  Most likely
this is not wasted space: a lot of it will be re-used for new
allocations.  But still, with 19MB of live data, we end up with 7MB of
space we can't release back.
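Reading the free/total ratios straight off the steady-state log lines
above makes the gap explicit (a quick check in Python, using the
figures from the two runs):

```python
# Fraction of the heap left free (unreleasable, due to fragmentation)
# right after a GC, computed from the quoted log lines.

def free_fraction(total_mb, free_mb):
    return free_mb / total_mb

# p=0.1 run, steady state: total=18.8M free=1.9M
print(round(free_fraction(18.8, 1.9), 2))  # -> 0.1, i.e. ~10%
# p=1.0 run, steady state: total=18.8M free=7.3M
print(round(free_fraction(18.8, 7.3), 2))  # -> 0.39, i.e. ~39%
```

So the p=1.0 run retains roughly four times as much unreleasable free
space, in line with the 10%-vs-significantly-higher observation.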

I think p=1.0 (and the corresponding implication that we use up about
twice as much memory as the minimum we need) might be an acceptable
tradeoff (for batch use), but I don't think I'd be comfortable going
beyond that.


        Stefan



