From: Marcus Brinkmann
Subject: Re: notifications
Date: Thu, 07 Oct 2004 22:14:12 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Thu, 07 Oct 2004 19:45:46 +0200,
Bas Wijnen <address@hidden> wrote:
> > The server is free to break up the request and reply phase, stash away
> > the information from the RPC and reply at a later time.
> 
> So we are in fact talking about the same thing, only I wouldn't really 
> call it a reply if it's not done by the worker thread which accepted the 
> call.

If it is not done by the worker thread, the worker thread must use
propagating IPC, so that the kernel redirects the client thread to
listen to the new thread that will eventually send the message you
don't want to call a reply, but which is for all practical purposes
indistinguishable from any other reply message.

Let's call it a deferred reply.

BTW, if you'd actually want to go for such a setup, you need to be
extra careful and add some additional security measures above what is
currently in the code.  The reason is that if you propagate the IPC
and return from the worker thread with ENOREPLY, then the client
thread in question is, from the perspective of the bucket manager,
allowed to make another RPC to the manager thread.  The manager thread
won't know that the client is actually supposed to still be listening
for deferred replies.  So, whatever you are propagating the message
to, must do its own check that the same client thread doesn't send
multiple requests.

It's not a showstopper, but it has to be done.  As every task will
want to watch out for task death notifications, the right design is
to have a single "task death notification distributor thread" in the
task server, to which all such requests are propagated.

As for how to do this: quite simple.  The
task-death-notification-distributor thread needs to keep a hash table
that maps client thread IDs to the pending RPC contexts.  In fact, it
may be reasonable to restrict this to one such operation per task at
any time: just map the task ID to the pending RPC context, and return
an error if there already is such an item registered for the sender.

Should we do what you proposed, and have other threads send messages
to the task server when they want to add a task ID to be watched for
death?  It seems to make sense to me, and is likely the most
straightforward implementation on both sides.  What I said earlier
about cancelling the blocking thread on the client side and later
resuming the operation seems only to add disadvantages (in this
case).

I am also getting more confident that full-blown "task info"
capability items for watching task deaths are superfluous, and that a
simpler implementation is feasible (just declaring which tasks you
want by listing their IDs as numbers).  Niels Möller at some point
suggested that you should only be able to use task info caps that you
get from someone else (you couldn't create them from a random task ID
yourself), but that adds a lot of communication overhead, and doesn't
seem to add much protection to the overall system.

> > However, this
> > is currently unsupported (it requires a bit of extra work because the
> > L4 kernel needs to know about it when the replying thread changes via
> > ipc propagation).
> 
> I thought they all replied with VirtualSender set to the manager thread? 
>   I don't see any problems in that case, but perhaps we're not talking 
> about the same thing.

You misunderstood what VirtualSender is and how propagation works
(it's a bit confusing, I admit; there is actually a bug in
bucket-manage-mt, in that it sets the propagation flag for reply
messages, which is just bogus).

Thread A sends message to B, and waits for B.

B sets VirtualSender to A, and propagates the message to C (via local IPC).
  -> The propagation flag indicates to C that this is a propagated IPC.
  -> The ActualSender for C will be B.
  -> The Sender for C will be A.
  -> The state of A is modified so that it now waits for C, not B.
     (Only if A is in a closed wait; if it is in an open wait, there
     is nothing to do.)

C replies to A just via normal IPC.  No propagation.

In the L4 spec, IPC system call section, thread A is called
"originator thread", B is called sender, C receiver.

This is how it works already (libhurd-cap-server/bucket-manage-mt.c,
line 849ff), when the manager propagates the message to the worker thread.

As you can use local IPC, this is very fast (can be faster than a
function call, if you happen to know that the message is already
marshalled and in registers).  The server-side "task death manager
thread" can receive from all local threads, and would have an
additional interface to wake it up on task deaths (so it can actually
go and send the notifications).

There should be some support functions in libhurd-cap-server to
support such a manager thread that "pools" deferred replies, but
there aren't yet.  It's something I have thought about, though, and
it shouldn't be too hard to add: the basic support, returning
ECAP_NOREPLY (as it is currently called), is already in there,
exactly for this scenario.

Thanks,
Marcus




