Re: capability address space and virtualizing objects
Neal H. Walfield
Fri, 29 Aug 2008 15:05:50 +0200
At Thu, 28 Aug 2008 11:48:56 -0400,
Jonathan S. Shapiro wrote:
> > From: Neal H. Walfield <address@hidden>
> > I'm currently working on IPC in Viengoos. I've decided to mostly
> > divorce IPC from threads by reifying message buffers. Thus, instead
> > of a thread sending a message to another thread, a thread loads a
> > message into a kernel message buffer and invokes another message
> > buffer specifying the first as an argument.
> Interesting. Out of curiosity:
> 1. Are the buffers bounded in size?
> 2. Who allocates their storage?
They are first-class objects allocated out of activities (think space banks).
> 3. Are message boundaries preserved?
I'm not sure what this means.
> Also, have you concluded that the double copy cost associated with
> buffering is acceptable?
A message buffer contains a capability slot identifying a data page
(i.e., that can also be made accessible in the hardware address
space). The data page contains a small header consisting of the
number of caps and the number of bytes in the message payload. The
remainder is the message payload. First there is an array of
capability addresses, which the kernel looks up and copies to the
message recipient. Following that is an array of bytes. The user
message buffer looks like this:

    +-----------+------------+----------------------+---------------+
    | cap count | byte count | capability addresses | payload bytes |
    +-----------+------------+----------------------+---------------+
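As a sketch in C (the type and field names here are illustrative assumptions, not Viengoos's actual definitions), the layout is a small header followed by the capability address array and then the byte payload:

```c
#include <stdint.h>
#include <stdlib.h>

#define PAGESIZE 4096

typedef uint32_t cap_addr_t;    /* a capability address (illustrative) */

/* The user message buffer occupies a single data page: a small header
   holding the number of capabilities and the number of payload bytes,
   then an array of capability addresses, then the payload bytes.  */
struct user_message_buffer
{
  uint32_t cap_count;           /* number of capability addresses */
  uint32_t byte_count;          /* number of payload bytes */
  cap_addr_t cap_addrs[];       /* cap_count capability addresses;
                                   byte_count payload bytes follow */
};

/* Return a pointer to the start of the byte payload, which sits
   immediately after the capability address array.  */
static inline unsigned char *
umb_payload (struct user_message_buffer *umb)
{
  return (unsigned char *) &umb->cap_addrs[umb->cap_count];
}
```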
The user message buffer is only examined when the message is actually
transferred to the target. Message transfer occurs as follows:
- the kernel revokes the frame from the source user message buffer
object (the next access of the source user message buffer will
allocate a fresh frame),
- the kernel finds the first MIN(source.cap_count, target.cap_count)
capabilities specified in the source message buffer and copies
them into the slots specified in the target message buffer,
- the kernel copies the MIN(source.cap_count, target.cap_count)
capability addresses from the target message to the source message,
- the kernel clears the remaining target.cap_count - MIN(source.cap_count,
target.cap_count) capability address entries in the source message,
- the kernel frees the frame associated with the target user message
buffer object and assigns it the frame that was associated with
the source user message buffer object.
Thus, the data is not copied in the kernel.
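The steps above can be sketched in C as follows. All names here (struct msg_buffer, transfer, the fixed-size frame) are illustrative assumptions, not Viengoos's actual code, and the capability lookup itself is elided; the point is the final step, where the source's frame is handed to the target wholesale, so the payload bytes are never copied in the kernel:

```c
#include <stdlib.h>
#include <string.h>

typedef unsigned long cap_addr_t; /* stand-in for a capability address */

/* A "frame" is modeled here as a plain allocation.  */
struct frame
{
  unsigned cap_count;             /* caps named in this message */
  cap_addr_t cap_addrs[16];       /* capability address array */
  char payload[64];               /* stand-in for the rest of the page */
};

/* Stand-in for the kernel message buffer object.  */
struct msg_buffer
{
  struct frame *frame;            /* current user message buffer frame */
};

static unsigned
min (unsigned a, unsigned b)
{
  return a < b ? a : b;
}

/* Transfer the message in SOURCE to TARGET, following the steps above. */
static void
transfer (struct msg_buffer *source, struct msg_buffer *target)
{
  struct frame *sf = source->frame;
  struct frame *tf = target->frame;
  unsigned n = min (sf->cap_count, tf->cap_count);

  /* Steps 2-3: look up the first N capabilities named by the source
     and copy them to the slots named by the target (elided here), and
     copy the target's capability addresses into the source message.  */
  memcpy (sf->cap_addrs, tf->cap_addrs, n * sizeof (cap_addr_t));

  /* Step 4: clear the capability address entries the target asked for
     but the source did not fill.  */
  memset (sf->cap_addrs + n, 0,
          (tf->cap_count - n) * sizeof (cap_addr_t));

  /* Steps 1 and 5: revoke the frame from the source (its next access
     allocates a fresh frame), free the target's old frame, and hand
     the source's frame to the target.  No payload bytes are copied.  */
  free (tf);
  target->frame = sf;
  source->frame = NULL;
}
```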
> > A message buffer contains a capability slot designating
> > a thread to optionally activate when a message transfer occurs.
> I am not clear what "optionally activate" means here. If it is important
> to the question that you are trying to ask, then could you clarify?
An activation on message delivery is often not required. Consider a
typical RPC: a client sends a message to a server and gets a reply.
If the client gets a reply, then the message that it sent must have
been delivered. Thus, the client does not require a delivery
activation.
> > When the message in SRC is delivered to DEST, the thread designated by
> > SRC is activated, indicating that the message in SRC has been
> > delivered, and the thread designated by DEST is activated indicating
> > that a message in DEST has arrived.
> Ah. So what you mean to say is not that the activation is optional, but
> that the presence of a thread capability in the buffer is optional?
The thread capability is also required for looking up
capabilities/capability slots. If no capabilities need to be
transferred, then no thread object is required and the current
prototype will handle this scenario.
> If so, I would suggest a change of terms. What you are describing as
> "buffers" have traditionally been called ports or mailboxes. Generally,
> a buffer holds payload, while the thing it is queued on is a port,
> queue, or mailbox.
A kernel message buffer can be queued on another message buffer. (A
message buffer contains a head and node pointers.) The page of
payload is associated with the message buffer.
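A minimal sketch of that queueing structure, with illustrative names (this is an assumption about the shape, not Viengoos's actual definitions):

```c
#include <stddef.h>

/* Each kernel message buffer carries a queue head (the message
   buffers queued on it) and a node pointer (its place in another
   buffer's queue), alongside the page of payload associated with it. */
struct msg_buffer
{
  struct msg_buffer *queue_head;  /* first message buffer queued here */
  struct msg_buffer *next;        /* node pointer: next in the queue
                                     this buffer is itself queued on */
  void *payload_page;             /* the associated page of payload */
};

/* Append MSG to the message buffers queued on PORT.  */
static void
enqueue (struct msg_buffer *port, struct msg_buffer *msg)
{
  struct msg_buffer **p = &port->queue_head;
  while (*p != NULL)
    p = &(*p)->next;
  msg->next = NULL;
  *p = msg;
}
```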
Do you still think it should be called a port? Is there some other
term that would be more appropriate?
> > This interface poses a problem for virtualization. One of the goals
> > of Viengoos is that all interfaces be virtualizable. This has (so
> > far) included the ability to fully virtualize kernel objects.
> > Virtualizing an object is done by way of a message buffer, on which
> > the same interface is implemented as the object that is being
> > virtualized.
> > This means that to virtualize a cappage...
> Initially I thought that you were concerned with virtualizing
> buffers/mailboxes, but now you seem to be speaking about virtualizing
> cappages. I will proceed on the assumption that your goal is to
> virtualize cappages. If I have misunderstood, please clarify.
I want to virtualize everything... My litmus test so far was
cappages. I had initially thought that virtualizing buffers/mailboxes
would be easy but now that I think about it, that is not the case: the
operations that manipulate a buffer/mailbox also need to be
virtualized.
> There is a situation in Coyotos that may be analogous: sender sends a
> message, is willing to block for delivery, but receiver buffer contains
> invalid pages. Appropriate keeper must be notified, but kernel will not
> hold any storage.
> In the Coyotos case, what we do is roll the transmission back (in an
> unbounded message system we could leave the two processes in
> mid-transfer). The kernel up-calls the handler, attributing the call to
> the sender (equally well, the receiver). The handler, on reply, restarts
> the alleged sender, thereby resuming the message transfer.
This is the direction that I have been thinking about. And, if I
understand the details right, essentially what I (tried to) propose in
my previous message.
> Now the problem that you face in managing mailboxes is not quite
> analogous. Ultimately, the problem you are really dealing with is that
> you cannot use the communication substrate primitives to simulate
> themselves. There is a reductio problem.
> It appears to me that there are (qualitatively) only two solutions to
> this reductio:
> 1. Define the messaging architecture in such a way that the transient
> message body can be elided in some cases, and ensure that the
> traversal reductio can be implemented entirely within these cases.
> In particular, kernel-implemented objects such as cappages are
> invariably very simple, and you may be able to exploit the fact
> that all of the required operations for this object are both
> unit-time and involve very small messages.
> 2. Define the messaging queues as a *cache* backed by the respective
> applications, and design the traversal solution in such a way that
> the caches involved in the traversal are likely to converge.
This is indeed the underlying problem, I think. Your explanation is
quite helpful, although I need to think about your proposal more.