Re: Unmapping fpages
Neal H. Walfield
Wed, 29 Dec 2004 11:24:15 +0000
At Wed, 29 Dec 2004 02:31:01 +0100,
Espen Skoglund wrote:
> [Neal H Walfield]
> > We are running into a problem when the client deallocates the
> > physical memory. physmem needs to make sure that it doesn't have an
> > extant mapping and it cannot trust (most) clients to do an l4_unmap.
> Ask yourself: in which case does it really matter that the client does
> not have read-only access to the memory (e.g., to the C library)?
> Surely, the client can not modify the memory. And if the contents of
> the memory will not change (as probably is the case with a library),
> there is no leakage of information anyway. Why not just trust the
> client to perform the unmap in these cases?
The problem isn't giving tasks continued read-only access to the C
library. The problem is providing a way to ensure that access to the
memory is revoked when the memory is reused.
Let's say that we have some code in the C library which some tasks
use. That is, they request the data from the file system and have all
received a read-only map of it from physmem. Eventually, these tasks
may no longer need the data and deallocate it. Alternatively, but
equally important and functionally equivalent, the tasks may be forced
to release the physical memory because of memory pressure. Let's
assume that one of the tasks is malicious and doesn't unmap the
memory. Once physmem reallocates the memory, the malicious task gets
a read-only window into the task that allocated it.
So it is not a question of simply having continued read-only access to
the C library after allegedly releasing physical memory: the problem
is giving tasks access to other tasks' potentially sensitive data.
In short: we have to multiplex memory because memory is a scarce
resource. Your solution seems to assume that memory is allocated once
and never multiplexed. Physical memory is only a cache of the
underlying backing store. It is true, as you say, that the contents of
shared objects won't change; however, the physical memory which
temporarily backs them will.
> If it happens that one
> particular client mapping needs to be revoked (e.g., due to it being a
> read-write mapping), and should be revoked separately from the other
> mappings, then you'll have to use some sort of alias mapping that the
> server *only* maps to one client so that he can revoke this one
> mapping at a later stage.
Where would you keep the alias mappings? In a proxy task, as I
suggested?
> Yes, I know, the situation is not ideal. We do, however, have ideas
> on how to remedy the situation should it prove to be a real problem.
Can you give us some idea of what your thoughts are so that we can
integrate them into our plans?
> > The other problem, and the one which is far worse in my analysis, is
> > that physmem cannot actually flush the 16kb fpage that it gave to
> > the client: it must flush the fpage that it has because it would be
> > "[p]artially unmapping an fpage [which] might or might not work"
> > (idem).
> The "partially unmapping" refers to established mappings. The server
> would revoke the existing mapping in the client, not in the server
> itself (i.e., it would do an unmap rather than a flush). As such,
> given that the server only did a partial mapping to the client in the
> first place, the "partial unmap" does not apply here.
Excuse me, I had unmap and flush flip-flopped in my head.
Could you confirm then that if there is a 4MB mapping in physmem and
it maps the first 4kb to a client task that physmem can call unmap on
the 4kb fpage and expect the client task to no longer have a mapping?
> > We could impose the requirement that memory be mapped to the proxy
> > task at most once. Thus, if 4kb of a block of memory is mapped and
> > later a request (either from the same task or from a different task)
> > for a 16kb map for the same block of memory which includes the 4kb
> > area is requested and the 4kb area is not properly aligned in the
> > proxy task, then we don't offer the 16kb but 4 4kb maps. This,
> > however, seems like a gratuitous limitation...
> Why is this such a limitation? What do you have to gain by having an
> unaligned 16KB "page" instead of 4*4KB "pages"? For sure, you will
> gain nothing in TLB coverage since the hardware will have to treat it
> as 4 separate pages. The in-kernel page tables and the mapping
> database would also have to treat it as 4 separate pages. Your only
> gain is perhaps some savings in the data structures used to implement
> your memory allocation stuff. However, since you're now dealing with
> unaligned pages in these structures, the data structures themselves
> and the algorithm to deal with them become more complex.
What I was trying to say was that physmem would have a 16kb mapping
aligned on a 16kb boundary. It maps 4kb of that to the proxy task.
The proxy task accepts it at an address which is aligned on a 4kb
boundary but not on a 16kb boundary (it doesn't know that the memory
in physmem is 16kb-aligned). Later, someone requests a 16kb mapping.
We now have to map over the old mapping, since we have the requirement
that the memory not be mapped in the proxy task twice, but if we try
to reuse the old mapping we lose the alignment.
Anyway, it has occurred to me that we only need to keep a one-to-one
mapping between the location of the memory in physmem and in the proxy
task and this problem disappears.
> > ...and moreover would mean that we could not flush mappings on an
> > per-task basis.
> How does this relate to unaligned mappings? I don't understand.
What I am trying to say here is that we would like to unmap on a
per-task basis. If physmem maps fpage X to clients A and B and then A
deallocates it but B does not, can we unmap (from physmem) only the
mapping from A? I think this would only be possible if we used a proxy task
and had a mapping from physmem at address X to proxy at address Y to A
and a second mapping from physmem at address X to proxy at address Z
to B. Then, if we want to unmap only the mapping in A, we need to unmap
the mapping from physmem to A (i.e., the mapping at Y in the proxy task).