Re: capability address space and virtualizing objects
Neal H. Walfield
Fri, 29 Aug 2008 17:01:50 +0200
Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.8 (Shijō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)
At Fri, 29 Aug 2008 09:47:41 -0400,
Jonathan S. Shapiro wrote:
> On Fri, 2008-08-29 at 15:05 +0200, Neal H. Walfield wrote:
> > At Thu, 28 Aug 2008 11:48:56 -0400,
> > Jonathan S. Shapiro wrote:
> > > 2. Who allocates [the buffer] storage?
> > They are first class objects allocated out of activities (think space
> > banks).
> Is this a persistent system?
Originally, it was going to be persistent; however, some discussions
with Marcus last year convinced me otherwise.
> > A message buffer contains a capability slot identifying a data page
> > (i.e., that can also be made accessible in the hardware address
> > space). The data page contains a small header consisting of the
> > number of caps and the number of bytes in the message payload. The
> > remainder is the message payload. First there is an array of
> > capability addresses, which the kernel looks up and copies to the
> > message recipient.
> So if I understand this, the payload that is actually enqueued is an
> (address space, pointer) pair, where the pointer points to a message
> descriptor that resides in sender space.
> If there is no means for "small
> messages", this will offer problematic performance, but I definitely see
> attractions in having this type of mechanism. Charlie Landau suggested
> the same approach for unbounded messages in EROS and Coyotos several
> years ago.
> But even so, that (address space, pointer) pair occupies storage in the
> receive queue. Would it be correct to infer that the in-kernel message
> structure is first-class?
Here's what the kernel "message buffer" currently looks like (a
sketch; field names are not final):
  /* Thread to activate.  */
  struct cap thread;
  /* Root of the address space for resolving capability addresses.  */
  struct cap as_root;
  /* A user buffer.  */
  struct cap payload;
  /* The invoked capability's protected payload.  */
  uint64_t protected_payload;
  /* Whether delivery is blocked.  */
  bool blocked;
  /* Copied at send invocation start.  */
  struct cap sender_activity;
  /* Buffers waiting to deliver a message to this buffer.  */
  struct list senders;
  /* Buffers waiting to receive a message from this buffer.  */
  struct list receivers;
The enqueue interface includes an activity that should be charged for
the resources consumed by message delivery and the service (for
simplicity, I didn't mention this in my last note). This is saved in
sender_activity.  The protected payload that is in the invoked
capability is saved in protected_payload.  Both are delivered to the
message's recipient.
> > The user message buffer is only examined when the message is actually
> > transferred to the target. Message transfer occurs as follows:
> > - the kernel revokes the frame from the source user message buffer
> > object (the next access of the source user message buffer will
> > allocate a fresh frame),
> I can see no reason why this revocation should be required. None of the
> content that you describe as existing in this frame is in any way
> sensitive, and there is no hazard to the kernel if the sender alters the
> payload on the fly, provided minimal care is taken in kernel accesses to
> the frame.
Isn't exposing the capability addresses in the target message buffer
sensitive?
> The more serious concern -- and only if Viengoos supports this -- is
> explicit revocation of the frame in mid-transfer.
Frames are second class in Viengoos.
> > - the kernel finds the first MIN(source.cap_count, target.cap_count)
> > capabilities specified in the source message buffer and copies
> > them into the slots specified in the target message buffer,
> Unless there is a very small bound on cap_count, this phase needs to be
It's bounded in that there is only a page's worth of space.  I have not
yet decided whether to further restrict this. But this is a
> > - the kernel copies the MIN(source.cap_count, target.cap_count)
> > capability addresses from the target message to the source message
> > buffer,
> I must not be reading this correctly. Why would it be appropriate for
> the kernel to disclose to the sender the addresses in the *target*
> address space to which the capabilities were transferred, especially if
> they will immediately be cleared:
The sender no longer has access to the frame.
> > - the kernel clears the target.cap_count - MIN(source.cap_count,
> > target.cap_count) capability address entries in the source message
> > buffer, and
> even if this clearing is quick, there is an incorrect temporary exposure
> of target information in your description.
The sender no longer has access to the page; the target gets access in
the next step.
> > - the kernel frees the frame associated with the target user message
> > buffer object and assigns it the frame that was associated with
> > the source user message buffer object.
> Somewhere in all this I am reasonably certain that a data payload gets
> copied, but that description seems to have gone missing.
The source frame is modified and transferred.  The bytes are not
copied.
> I would not have expected the old target frame to be freed. Given the
> road you seemed to be proceeding down, I anticipated that the protocol
> would clear the target frame and then execute a frame exchange.
Why would it clear the target frame?  Do you mean so as to properly
account for the CPU cycles, since the frame has to be cleared
eventually anyway?
A frame exchange would also be possible but as frames are second
class, that seems to me to just be an optimization of some sort. Or
is there another reason that I am missing?
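Putting the transfer steps together, here is a toy model of the frame
hand-off (all names are made up; this is a sketch of the protocol as
described, not the actual implementation, and resolving capability
addresses and copying the actual capabilities is elided):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

enum { CAP_MAX = 16 };

/* Toy stand-in for the user message frame.  */
struct frame
{
  unsigned cap_count;
  uintptr_t cap_addrs[CAP_MAX];
};

/* Toy stand-in for the kernel message buffer: it references a frame.  */
struct buffer
{
  struct frame *frame;
};

/* Deliver SOURCE's message to TARGET following the steps described
   above.  */
static void
transfer (struct buffer *source, struct buffer *target)
{
  struct frame *f = source->frame;
  struct frame *t = target->frame;
  unsigned n = f->cap_count < t->cap_count ? f->cap_count : t->cap_count;

  /* Revoke the frame from the source buffer; the sender's next
     access will allocate a fresh frame.  */
  source->frame = NULL;

  /* Copy the first N capability addresses from the target frame into
     the (soon-to-be-delivered) source frame, so the recipient can see
     where its capabilities were stored.  */
  memcpy (f->cap_addrs, t->cap_addrs, n * sizeof f->cap_addrs[0]);

  /* Clear the target.cap_count - N address entries that received no
     capability.  */
  for (unsigned i = n; i < t->cap_count; i++)
    f->cap_addrs[i] = 0;

  /* Free the old target frame and hand the source frame to the
     target.  */
  free (t);
  target->frame = f;
}
```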
> > > > A message buffer contains a capability slot designating
> > > > a thread to optionally activate when a message transfer occurs.
> > >
> > > I am not clear what "optionally activate" means here. If it is important
> > > to the question that you are trying to ask, then could you clarify?
> > An activation on message delivery is often not required. Consider a
> > typical RPC: a client sends a message to a server and gets a reply.
> > If the client gets a reply, then the message that it sent must have
> > been delivered. Thus, the client does not require a delivery
> > notification.
> Then I misunderstood completely. I do not understand how either the
> server or the client are activated on message delivery. From the initial
> description, I had thought that the purpose of the thread capability was
> to notify the recipient that a message existed to be processed.
There are two message buffers: the one that had the message (source)
and the one that received the message (target). The client queues
source on target.  If target is blocked, the client (optionally)
continues executing. When the message is finally delivered from
source to target (when target becomes unblocked), both source->thread
and target->thread can be activated.
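That sequencing, as a toy sketch (hypothetical names; deliberately
simplified to a single pending sender per buffer):

```c
#include <stddef.h>

/* Toy model of a kernel message buffer's queueing and activation
   behavior.  */
struct buffer
{
  int blocked;                 /* Is delivery to this buffer blocked?  */
  int thread_activated;        /* Was the optional thread activated?   */
  struct buffer *pending;      /* One queued sender (simplification).  */
};

static void
deliver (struct buffer *source, struct buffer *target)
{
  /* ... frame transfer happens here ...  */

  /* Both buffers' thread capabilities, if present, can be activated:
     the source's as a delivery notification, the target's to signal
     that a message has arrived.  */
  source->thread_activated = 1;
  target->thread_activated = 1;
}

/* The client queues SOURCE on TARGET.  If TARGET is blocked, the
   message waits and the client may continue executing.  */
static void
enqueue (struct buffer *target, struct buffer *source)
{
  if (target->blocked)
    target->pending = source;
  else
    deliver (source, target);
}

/* When TARGET becomes unblocked, the pending message is delivered.  */
static void
unblock (struct buffer *target)
{
  target->blocked = 0;
  if (target->pending != NULL)
    {
      deliver (target->pending, target);
      target->pending = NULL;
    }
}
```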
> > > Ah. So what you mean to say is not that the activation is optional, but
> > > that the presence of a thread capability in the buffer is optional?
> > The thread capability is also required for looking up
> > capabilities/capability slots.
> Not so. As we have demonstrated in Coyotos, data pages and capability
> pages can be mapped within a single address space. Something must name
> the address space, and the thread capability is a reasonable choice, but
> if the address space is first class then it could be named directly.
You're right.  I've updated the kernel "message buffer" object to
include a slot for the address space root.
> I definitely think the name needs to change. When people hear the term
> "buffer", what leaps to mind is "some resource that contains the payload
> of a message". They definitely do not think "a thing on which a message
> can be enqueued", and I cannot envision a scenario in which it makes
> sense to enqueue one piece of payload on a second piece of payload. I
> can envision useful scenarios in which queues might be first class and
> capabilities to them might be transferred, but I cannot envision a
> scenario in which a queue should get enqueued on another queue.
> The concepts of "the message being transferred" and "the destination of
> transfer" seem (to me) to want to be clearly separated. If there is a
> reason not to do this, I would be interested to understand it, but
> offhand I can see only complications and confusions arising from what
> you seem to be describing.
Do you mean to have two kernel object types instead of one? One for
messages buffers and one for queues?