
Re: Position paper

From: Tom Bachmann
Subject: Re: Position paper
Date: Sat, 06 Jan 2007 19:26:32 +0100
User-agent: Thunderbird (X11/20061231)


Neal H. Walfield wrote:
> [...] We are also very interested in discussing reactions to
> this proposal.

The paper tries to articulate problems and solutions. While I think it
does so pretty well, the information density of the text is very high
(certainly due to the imposed five-page limit), and it is therefore
relatively hard to "fill in the gaps" of the proposed low-level system
structure (i.e. resource pools). I think it would be great if you could
elaborate on that a bit and make the paper more concrete.
I will outline my present understanding (or rather, interpolation) of it
below, in the hope of making that task easier for you (so you "just"
have to correct the mistakes, of which there will certainly be many).

A resource pool is an abstract entity for accounting a resource (e.g.
cpu time). Associated with it is a scheduling policy (which might, e.g.,
include a quota). A resource pool supports several operations: resources
can be allocated from it and deallocated; subpools can be created
(initially with an equal policy); the scheduling policy can be changed,
but only to become "worse" than before (e.g. a smaller quota, say to
shrink the resources a subpool makes available to a child process); and
the pool can be destroyed, which also destroys all subpools created from
it. The subpool mechanism effectively organizes resource pools into a
tree. There exists a "master pool" from which all other pools are
(directly or indirectly) derived, and whose scheduling policy basically
says "all of the resource is available from this pool".

Now there are three types of resource pools: for cpu time, for main
memory, and for backing store memory.

The pools for cpu time are the clearest, I think: they have policies
like "at least 5% of the available time, with priority 7 for getting
more time" or "run at least once every 10ms for 1ms", allowing for both
real-time and time-shared processing.
What other examples of scheduling policies exist?
It is also not clear to me how the policies creatable in this way can be
ordered (e.g., can a pool of the first example's type be created as a
subpool of one of the second example's type?).
I don't think cpu time pools are meant to be passed to servers. Although
this would make accounting more accurate, it would also greatly
complicate the servers and require special kernel support, as has been
discussed on the list (or on coyotos-dev?).

The pools for main memory and backing store memory are more complicated,
as they interact. Main memory policies are probably in the spirit of "at
least 25 pages accessible at any time" (or maybe even "exactly 25 pages
accessible at any time"), with backing store policies being similar.
What other policies exist? How are they ordered?
But what happens when pages have to be freed (e.g. because the parent
shrinks the main memory pool)? As memory pools are to be passed to
servers, it must be possible to specify from which backing store pool
the space is to be taken when a page is written to disk. So, as it
appears to me, an allocation of a main memory page has to take a
(potentially void, indicating discardability) backing store allocation
as an argument, and, as explained in the paper, a priority that gives
the order in which pages are to be freed. Backing store pages themselves
can be allocated at will.
What happens when the page with the lowest priority is paged out and
then referenced, which in turn causes the page with the second lowest
priority to be freed, which is touched next and paged in, causing the
lowest priority page to be freed again, and so on? That is, how are
malicious applications stopped from slowing down the system by
dictating a very bad page-out
--
-ness-

