emacs-devel

Re: When should ralloc.c be used?


From: Eli Zaretskii
Subject: Re: When should ralloc.c be used?
Date: Fri, 28 Oct 2016 12:43:02 +0300

> Cc: address@hidden, address@hidden, address@hidden
> From: Daniel Colascione <address@hidden>
> Date: Fri, 28 Oct 2016 01:44:33 -0700
> 
> >> Say you have a strict-accounting system with 1GB of RAM and 1GB of swap.
> >> I can write a program that reserves 20GB of address space.
> >
> > I thought such a reservation should fail, because you don't have
> > enough virtual memory for 20GB of addresses.  IOW, I thought the
> > ability to reserve address space is restricted by the actual amount of
> > virtual memory available on the system at the time of the call.  You
> > seem to say I was wrong.
> 
> I'm not sure you're even wrong :-) What does "virtual memory" mean to 
> you?

Physical + swap, as usual.

> When we allocate memory, we can consume two resources: address space and 
> commit. That 100GB mmap above doesn't consume virtual memory, but it 
> does consume address space. Address space is a finite resource, but 
> usually much larger than commit, which is the sum of RAM and swap space. 
> When you commit a page, the resource you're consuming is commit.
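
A minimal sketch of that reserve/commit distinction, assuming a 64-bit
Win32 build; the sizes and the error handling below are illustrative,
not taken from the Emacs sources.  It only shows that MEM_RESERVE
consumes address space while MEM_COMMIT consumes commit charge:

  #include <windows.h>
  #include <stdio.h>

  int
  main (void)
  {
    SIZE_T reserve_size = (SIZE_T) 20 * 1024 * 1024 * 1024; /* 20GB */
    SIZE_T commit_size  = 64 * 1024;			     /* 64KB */

    /* Reserve a large range: no commit charge is taken yet, so this
       can succeed even when RAM + swap is far smaller than 20GB.  */
    void *base = VirtualAlloc (NULL, reserve_size, MEM_RESERVE, PAGE_NOACCESS);
    if (!base)
      {
        printf ("reserve failed: %lu\n", GetLastError ());
        return 1;
      }

    /* Commit a small piece of the reserved range: this is where the
       commit charge (backed by RAM/swap) is actually consumed, and
       where a failure due to commit exhaustion can happen.  */
    void *p = VirtualAlloc (base, commit_size, MEM_COMMIT, PAGE_READWRITE);
    if (!p)
      printf ("commit failed: %lu\n", GetLastError ());

    VirtualFree (base, 0, MEM_RELEASE);
    return 0;
  }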

If reserving a range of addresses doesn't necessarily mean they will
be later available for committing, then what is the purpose of
reserving them in the first place?  What good does it do?

We have in w32heap.c:mmap_realloc code that attempts to commit pages
that were previously reserved.  That code does recover from a failure
to commit, but such a failure is deemed unusual and triggers special
warnings under a debugger.  I have never seen these warnings, except
when we had bugs in that code.  You seem to say that this is based on
false premises, and that there's nothing unusual about MEM_COMMIT
failing for a range of pages previously reserved with MEM_RESERVE.
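
For comparison, a simplified sketch of that pattern: committing pages
inside a previously reserved region and recovering when MEM_COMMIT
fails.  This is not the actual w32heap.c:mmap_realloc code; the
commit_more helper and the sizes are hypothetical, for illustration
only.

  #include <windows.h>
  #include <stdio.h>

  static void *
  commit_more (void *reserved_base, SIZE_T offset, SIZE_T nbytes)
  {
    void *p = VirtualAlloc ((char *) reserved_base + offset, nbytes,
			    MEM_COMMIT, PAGE_READWRITE);
    if (!p)
      {
	/* MEM_RESERVE guarantees the addresses, not the backing
	   store, so MEM_COMMIT can still fail here; the caller must
	   be able to recover, e.g. by signaling memory exhaustion.  */
	fprintf (stderr, "MEM_COMMIT failed: error %lu\n",
		 (unsigned long) GetLastError ());
	return NULL;
      }
    return p;
  }

  int
  main (void)
  {
    SIZE_T page = 4096;
    void *base = VirtualAlloc (NULL, 64 * page, MEM_RESERVE, PAGE_NOACCESS);
    if (base && commit_more (base, 0, page))
      puts ("committed first page of the reserved range");
    if (base)
      VirtualFree (base, 0, MEM_RELEASE);
    return 0;
  }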


