Re: Paging Interface
Neal H. Walfield
15 Jun 2002 14:21:19 +0200
Gnus/5.0808 (Gnus v5.8.8) Emacs/21.2
> What if several processes want to map the same thing, e.g. using mmap
> on a device or an ordinary file? Whose responsibility is it that they
> really end up sharing the same pages?
The virtual memory manager, which is to say, Mach, sees that the
processes have all mapped the same object. When a page fault occurs,
the VMM searches its internal cache for the page; if it is not found,
it contacts the appropriate manager, indicating the page that it needs
(memory_object_data_request). Mach currently does not tell (nor does
it have a mechanism for finding out) which client faulted the page in. This
limitation is a direct result of only being able to reference a memory
object with a single port. That is, a server must give out port
rights to the memory object itself: there is no level of indirection.
A direct consequence of this is that there is no way to do fine-grained
access control: all clients either have read/write access or none do.
This limitation will be corrected in the L4 model.
> > One of my idea is to have a pager thread for each task.
> Would this pager be a thread living inside the task which it serves? Or
> in some separate more-or-less privileged task?
Either internally or externally, but certainly not in a privileged task.
> > What is not clear to me, however, is how we should manage physical
> > memory. Would it be appropriate to make every server a vm
> > server?
> I don't think I understand you here.
My qualm is that I think we can export the paging policy to user
space, which is not currently done; however, I do not know how this
should be done.
> I think the simplest way to get
> started would be to have a server that talks to sigma0 and grabs all
> the physical memory, and which implements an interface that lets
> other servers allocate *and* deallocate pages. Optionally, one may
> also add some protocol by which it could ask its clients to return
> some pages.
Well, at a very minimum, we need to also share, copy and transfer
pages. And since we are already this complex, we may as well try
getting a reasonable framework up and running.
> I'm curious about how such a server relates to the uvm memory system
> Farid has talked about, but I guess uvm is a higher level thing?
I think that UVM has a lot of important ideas that we can pull from.
However, I am not sure that it can play a larger role than the
anonymous memory server and default pager.
> Exactly what is the role of the default pager? My guess is as follows:
> If no swap space is registered, it will only hand out real physical
> pages, similarly to the above memory server. Clients can register as
> swap space handlers, and when that happens, the default pager can hand
> out more pages, and coordinate with the swap space handlers when pages
> need to be swapped in or out.
In Mach, the role of the default pager is to accept pages from Mach
and send them to swap. All physical memory management is done by Mach.
> Finally, there ought to be some hooks where the posix servers can get
> into the memory allocation process, in order to implement limits.
Limits will always be a problem when we have a distributed
architecture where untrusted servers allocate on behalf of their
clients.
> > Perhaps, we should have multiple core servers which compete for
> > physical memory at startup and then give it out according to some as
> > of yet undetermined algorithm?
> If you have several "core servers", they could cooperate by allocating
> the pages they need from the above memory server.
> But I'm not sure that's really useful. If what's needed is simply a
> default pager (it's role is not entirely clear to me, though), then it
> may make more sense to have a single default-pager server and let that
> grab all sigma0's memory at startup.
Kevin suggested that an economic model might be interesting. A
physical memory server would be started on system boot and then all
tasks would be allocated so much money at startup. They could use
this to barter with others and the physical memory server itself. As
physical memory ran low, the physical memory server could offer to buy
pages back and raise the price of memory. If memory was plentiful,
prices would go down, etc.
I am not sure that this is a model that we need right now, or one
that I am even capable of implementing; however, it is certainly
interesting and would make a good research project.
My question was (and still is): do we want to stay with the current
model for managing memory? What are its flaws and what can we do to
correct them in both the near term and the long term?