Re: synchronous RPCs vs. asynchronous RPCs


From: Marcus Brinkmann
Subject: Re: synchronous RPCs vs. asynchronous RPCs
Date: Fri, 5 Sep 2003 16:10:13 +0200
User-agent: Mutt/1.5.4i

On Fri, Sep 05, 2003 at 07:09:31AM -0400, Roland McGrath wrote:
> > Heretic claim of the day:
> > 
> > "We should only do synchronous RPCs, and implement all asynchronous RPCs as
> >  synchronous RPCs performed by a helper thread."
> 
> Uh, I thought this was fundamental L4 dogma already.

L4 does synchronous IPC, in that a send or receive only happens when both
the sender and the receiver are in the system call.  What I mean is
synchronous RPC, in that the client is expected to block until both the send
and the receive phase are finished.
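
To make the distinction concrete, here is a rough sketch (ipc_send,
ipc_receive and ipc_call are just stand-ins for whatever primitives we end
up with, not the real L4 interface):

  /* Synchronous IPC, but asynchronous RPC: the client can go off and do
     other work between the send and the receive phase.  */
  error_t
  async_rpc (thread_t server, msg_t *request, msg_t *reply)
  {
    error_t err = ipc_send (server, request);  /* Blocks until received.  */
    if (err)
      return err;
    do_other_work ();            /* The client is "out of the picture".  */
    return ipc_receive (server, reply);
  }

  /* Synchronous RPC: the send and receive phase form one atomic, blocking
     operation, so the reply can not be missed.  */
  error_t
  sync_rpc (thread_t server, msg_t *request, msg_t *reply)
  {
    return ipc_call (server, request, reply);
  }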

> > This applies to select()/poll(), asynchronous msync() and other operations
> > like that.
> 
> msync with MS_ASYNC is just like e.g. file_sync with WAIT==0: it's a
> synchronous RPC that requests an asynchronous action.  It's not appropriate
> to implement those things with asynchronous client use of synchronous RPC
> interfaces.  In those operations, there is a synchronous phase during which
> errors can be diagnosed, and then asynchronous server work with no
> completion notification to the client.

Neal and I have gone to great lengths in our redesign to allow proper
resource tracking, and to prevent any kind of denial of service attack.

Asynchronous operations are problematic, because they let the server do
something while the client is "out of the picture" and can no longer be held
responsible for the cost.  Depending on the operation, this may or may not
be a problem, but usually it is.

msync() is a good example.  The way mmap() will work is that the user
adds an entry to the pager thread's table of mappings.  Then, at page fault,
the pager will create a container and use it to read the data from the
filesystem.  The filesystem will fill the container with the right pages
and then return.  The user can then map the pages.

It is important to note that the filesystem is out of the picture as far as
the client is concerned once the pages have been read.  From there on, the
client can assume that the pages are his, and he can do with them what he
wants.  Of course, we have to do some special tricks to properly share the
pages (they are marked in a special way to allow the filesystem to reuse
them for other clients if the need arises).  I will go into more detail on
this VM system very soon.
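
As a rough sketch of that fault path (all the names here are made up for
illustration, they are not the actual interfaces):

  /* Page fault handling in the client's pager thread (sketch).  */
  error_t
  handle_fault (uintptr_t fault_addr)
  {
    /* Find the mapping that mmap() recorded earlier.  */
    struct mapping *m = pager_lookup_mapping (fault_addr);
    if (!m)
      return EFAULT;

    /* Create a container backed by the client's own physical pages.  */
    container_t cont = container_create (m->npages);

    /* Synchronous RPC: the filesystem fills the container and returns.
       After this, the filesystem is out of the picture.  */
    error_t err = fs_read_into_container (m->file, m->offset, cont);
    if (err)
      return err;

    /* Map the pages into the faulting address space.  */
    return container_map (cont, fault_addr);
  }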

For msync(), the user has to write back the pages explicitly.  For this,
the user has to send the container to the filesystem again.  Now, what would
happen if we allowed the operation to be asynchronous?  Then the filesystem
would have to keep its own hard reference to the pages until the sync
completed.  But this means that the filesystem must pay for the memory, i.e.,
these client pages count toward its own physical page usage.  This would be
a simple-to-exploit DoS attack on the server.
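
In code, the synchronous write-back might look roughly like this
(container_for_range and fs_write_from_container are invented names, just
for the sake of the example):

  /* msync(): the client hands its own pages back to the filesystem and
     blocks until the write-back is done, so the filesystem never needs a
     hard reference of its own (sketch).  */
  error_t
  sync_pages (file_t file, off_t offset, void *addr, size_t len)
  {
    /* The container refers to the client's pages; the client keeps
       paying for the memory for the whole duration of the RPC.  */
    container_t cont = container_for_range (addr, len);
    return fs_write_from_container (file, offset, cont);
  }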

This description probably raises more questions than it answers, but I hope
it at least makes this much clear: the server should not be required to pay
for client resources like memory pages for file data, even temporarily.
With the container model, we can avoid that, and even do nifty things like
zero-copy from the device driver directly to the user.

> > I don't think it should apply to notifications like task death
> > notifications (although... I am still considering it!  If we can restrict
> > it to task death notifications, we only need one thread.  Object death
> > notifications for other capabilities can (from the servers point of view)
> > safely deferred until the client actually tries to use the capability the
> > next time - I have to check the Hurd code if other object death
> > notifications are needed).
> 
> Haven't we covered this before?  Long ago, in specifying the constraints of
> what the Hurd needs from an underlying IPC system/object model we made it
> very clear that we only need no-senders notifications for object
> implementors (servers) to promptly know they have lost all attached
> clients, and some form of timely task death handling that the proc server
> (or moral equivalent) either uses or implements if it's in sufficient
> control for that.  We don't in general make use of dead-name notifications,
> which are the general kind of object death notification Mach provides and
> what serves as task death notification.  In the places we do, it's to serve
> some particular quirky need (and mostly those are side effects of Mach's
> decouplable RPCs) and not a semantic model we insist on having.

Ah, good.  We covered this before, and I had a suspicion that dead-name
notifications are not important, but I wasn't sure.  This makes things
indeed much better for us.

> > * An RPC consists of a send and a reply phase.  The only way for a client
> >   to make sure it will reliably receive the reply is to go from the send
> >   to the receive operation atomically _and block_.  The server just can
> >   sensibly assume that if the client doesn't care about the reply, it
> >   might be malicious and not sincere.
> 
> The latter presumption has some problems.  Contrarily, blocking for a reply
> exposes a client to potential malice or error from a server.

A client trusts its server.  If it doesn't, it can not make any blocking
RPC, which is clearly not what you want.  You still have the option of using
a timeout or interrupting the thread, if you are that paranoid about using
the server.  But then you can not make reliable use of the server's
services.

> In Mach, you
> can use a simpleroutine and know that you will block until the server has
> had a chance to get the message without possibility of dropping but not
> block waiting for it to dick you around.  Some of the Hurd's use of
> simpleroutines relies on this security feature, and that would all have to
> be checked and thought through carefully.

I will certainly look out for such situations.  However, the robustness and
security of the server have a higher priority than those of the client.

> > * Consequence:  This allows for a simple way to aviod DoS attacks on
> 
> This one is pretty meaningless to me.  You can always create more threads.
> Bottom line, this is just about resource constraint and you have to address
> that in a more direct and general way rather than relying on quirky
> semantic limitations.

Creating threads is a privileged operation; the task server will control the
number of threads you allocate with the help of the task policy server.  In
other words: it will be easy to set and enforce a quota on the number of
threads.
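
A sketch of how the quota check in the task server might look (the names,
and the exact split between task server and task policy server, are only
illustrative):

  /* Thread creation in the task server, with a per-task quota.  */
  error_t
  task_create_thread (task_t task, thread_t *new_thread)
  {
    /* Ask the task policy server for this task's thread quota.  */
    unsigned int quota = policy_get_thread_quota (task);
    if (task->nthreads >= quota)
      return EAGAIN;
    task->nthreads++;
    return allocate_thread (task, new_thread);
  }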

I agree that my limitation appears to be quirky, but I think it is just the
extreme case of "throttling" or whatever other type of limit you might
think of.  After all, you must address this simple DoS attack:

  while (1)
    simple_rpc (...);

and
  while (1)
    normal_rpc_with_just_the_send_phase_skip_receive_phase (...);

which both will hammer the server with RPC requests.  My idea is to limit
the number of RPCs one thread can make at the same time to 1.  I have yet to
look in detail at the simple routines we could make use of (or even
require): if they are really important, the server could exempt such simple
routines from the limit.
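
On the server side, such a limit could be checked roughly like this (the
per-client bookkeeping and the names are made up):

  /* Allow at most one RPC in flight per client thread (sketch).  */
  error_t
  demux_request (thread_t sender, msg_t *request)
  {
    struct client_state *st = lookup_client_thread (sender);
    if (st->rpcs_in_flight >= 1)
      return EAGAIN;            /* Or just drop the request.  */
    st->rpcs_in_flight++;
    error_t err = dispatch (sender, request);
    st->rpcs_in_flight--;
    return err;
  }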

> > * Good resource tracking: The client pays all costs associated with the RPC.
> >   This is not obvious, so here is an example: an asynchronous msync means
> >   that the client continues to run after the msync() invocation.  But that
> >   means that the filesystem would have to pay for a copy of the synced page
> >   until the page is actually written back.  This opens a DoS attack.  By
> >   requiring the client to provide the page in a container with a synchronous
> >   msync(), the filesystem does not need to pay for a copy and can work on 
> > the
> >   client's resources.
> 
> You are thinking about the resource tracking the wrong way.  The pages and
> backing store are resources associated with the client by the mapping.
> msync calls do not affect these resource allocations or enforcement.

Please see above for a more detailed explanation of msync() and the virtual
memory management.  The filesystem does not need to allocate any pages for
file data in the normal case, regardless of the user operation (read,
write, map).  However, if msync() were allowed to happen asynchronously,
then the server would have to allocate pages for the user's file data just
to keep it alive.

Now it is really time to write more documentation on this memory management
stuff :)

Thanks,
Marcus


-- 
`Rhubarb is no Egyptian god.' GNU      http://www.gnu.org    address@hidden
Marcus Brinkmann              The Hurd http://www.gnu.org/software/hurd/
address@hidden
http://www.marcus-brinkmann.de/



