l4-hurd

Re: notifications


From: Marcus Brinkmann
Subject: Re: notifications
Date: Sun, 10 Oct 2004 01:37:06 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Thu, 07 Oct 2004 22:14:12 +0200,
Marcus Brinkmann wrote:
> 
> At Thu, 07 Oct 2004 19:45:46 +0200,
> Bas Wijnen <address@hidden> wrote:
> > > The server is free to break up the request and reply phase, stash away
> > > the information from the RPC and reply at a later time.
> > 
> > So we are in fact talking about the same thing, only I wouldn't really 
> > call it a reply if it's not done by the worker thread which accepted the 
> > call.
> 
> If it is not done by the worker thread, the worker thread must use
> propagating IPC so that the client thread is redirected by the kernel
> to receive from the new thread, which will eventually send the message
> that you don't want to call a reply, but which is for all imaginable
> purposes indistinguishable from any other reply message.
> 
> Let's call it a deferred reply.
> 
> BTW, if you'd actually want to go for such a setup, you need to be
> extra careful and add some additional security measures above what is
> currently in the code.  The reason is that if you propagate the IPC
> and return from the worker thread with ENOREPLY, then the client
> thread in question is, from the perspective of the bucket manager,
> allowed to make another RPC to the manager thread.  The manager thread
> won't know that the client is actually supposed to still be listening
> for deferred replies.  So, whatever you are propagating the message
> to, must do its own check that the same client thread doesn't send
> multiple requests.

On second thought, this requires even a bit more work, because as
described above, and as it is currently implemented, it breaks
cancellation.

To allow cancellation to work properly, the capability server library
has to be informed about the propagation and register the change in
its internal data structures.  It needs to associate the client thread
with the receiver of the propagation, i.e. the new "worker thread"
(not a worker thread of the cap library, but some other thread now
doing the work).

This is in some sense good news: you don't need to build in the extra
protection mechanism I demanded above, as it falls out as a side
effect of updating the pending_rpcs hash table.

However, it is bad news in that I don't know how you could let this
new processing thread know _which_ RPC is cancelled.  pthread_cancel
doesn't allow you to pass any extra information.  I could imagine an
extension of pthread_cancel that lets you pass a reason, which the
cancelled thread could then read from thread-local data (the reason
here would be the client thread ID, or the RPC context).  Or, one
could store the information in the RPC context, but then you'd have to
walk the whole list to find the one that is cancelled.

It's nitty-gritty details like this that made me focus on simple,
synchronous RPCs, processed solely in the respective worker thread,
for a first implementation.

Thanks,
Marcus
