
Re: Swapping pagers out

From: Marcus Brinkmann
Subject: Re: Swapping pagers out
Date: Fri, 11 Feb 2005 14:12:28 +0100
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Fri, 11 Feb 2005 12:23:48 +0100,
"address@hidden" <address@hidden> wrote:
> I have read the design document: it seems that it is not possible
> for a task to be completely paged out, because of the pager thread.
> Am I correct?

This particular detail has been changed a bit.  Here is my latest
thought on this topic (Neal may have a different opinion, and as far
as I know nothing of this is really settled down in detail yet).

(I write "there is" where you should read: "In my current design,
there would be" etc).

There is only one pager thread in every task.  This pager thread
serves all page faults, but it handles them differently.  Page faults
going to anonymous memory or other existing memory objects (i.e.,
containers) are handled by mapping in the memory from physmem
(allocating it, if necessary).  Page faults going to mapped regions
of files which are not yet loaded into containers can be handled by
the faulting thread itself with a little trick: the pager can make
room for the data that needs to be loaded and mapped, and make the
faulting thread jump to an exception handler.  The exception handler
can do the filesystem RPCs to load in the filesystem data, and can
then provide this back to the pager before jumping back to the
faulting instruction.

This way the pager will never make any blocking RPCs to filesystems -
the only RPCs it makes are to the trusted physmem server.  This design
thus allows the pager service to run single-threaded, or with one
thread per CPU if that's needed.

Now, about paging out the whole task.  It was always an open question
how to make sure the pager code itself is not paged out, as physmem
must be able to reorganize physical memory and thus to arbitrarily
unmap pages at any time.

My idea was to have a single container associated with each task in
physmem that is used as a last resort pager.  This means that any page
faults in the pager thread will be handled by physmem by providing 1:1
mappings (vaddr == offset in container) out of the designated
container.  In fact, for very simple tasks physmem would be the only
pager needed.  Every task would start out as a very simple task (the
startup code) until it sets up its own real pager service.

This designated container would be created by the task creator via
shared copying from other containers, and could contain code and
data sections as well as anonymous memory.

How does all of this answer your question?  Well, as soon as we have
this, physmem could in fact be made to swap out the whole task (except
for active DMA regions).  So, the constraints you mention, which
follow from the TeX document in the source, would no longer apply.

