
Re: auth handshake and rendezvous objects

From: Niels Möller
Subject: Re: auth handshake and rendezvous objects
Date: 06 Nov 2002 09:42:00 +0100
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2

address@hidden (Neal H. Walfield) writes:

> How can a dozen threads send rpcs at the same time?!?  We simulate
> concurrency and even in SMP machines, there will be locks.

Say you have two cpu:s, and one thread on each cpu tries to send a
message to the same receiver thread at the same time. I'd expect one
of the rpc:s to succeed and the other to time out. The receiving
thread would get cpu time on the same cpu as the thread sending the
successful rpc (as the sending thread won't use it anyway while the
rpc is in progress).

Hmm, all this is about a zero timeout for the send phase of the rpc;
then we have a separate timeout for the receive phase, which I'm
afraid can't be zero in our case.
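A minimal sketch of that race, using Python's standard library as a
stand-in for the IPC layer (the one-slot queue plays the role of the
receiver's message port; none of these names are real L4 calls):

```python
import queue
import threading

# One-slot queue standing in for the receiver thread's message port.
port = queue.Queue(maxsize=1)
results = {}

def sender(name):
    try:
        # Zero send timeout: fail immediately if the port is busy.
        port.put(name, block=False)
        results[name] = "sent"
    except queue.Full:
        results[name] = "timed out"

threads = [threading.Thread(target=sender, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Exactly one sender wins the rendezvous; the other fails at once.
print(sorted(results.values()))  # ['sent', 'timed out']

# The receive phase, by contrast, needs a nonzero timeout: the winning
# message sits in the port until the receiver gets around to it.
msg = port.get(timeout=1.0)
```

With `maxsize=1` and no consumer running, exactly one `put` succeeds
regardless of scheduling, which mirrors the two-cpu scenario above.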

> > Would it help to have a single one-message buffer per-receiver (in our
> > case, A), in S, and a corresponding thread? When B asks for one of A:s
> > handles, it will block until the server's buffer is empty. Then the
> > server thread will receive the message from B, and it can block while
> > delivering it to A (using the same timeout as B used when calling
> > S).
> The point is that the server is not supposed to block.  Doesn't this
> algorithm defeat that?

The server thread that is responsible for handle transfers from a
particular client, A, will block if A blocks. The rest of the server
process will run as normal. That means that the process B that tries
to get a handle from A will block or timeout when it talks to S. But
that's no problem, I think, because B would block or timeout in the
same way if it talked to A directly, so S isn't degrading service in
any way.
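The per-receiver one-message buffer could be sketched like this, again
with Python's standard library standing in for the real IPC machinery
(the class and method names are mine, purely for illustration):

```python
import queue

class HandleRelay:
    """One-slot buffer in the server S for a particular receiver A.

    put_handle() is what B's rpc effectively does: it blocks (or times
    out) while the slot is full, i.e. while A has not yet picked up the
    previous handle -- just as B would block if it talked to A directly.
    """
    def __init__(self):
        self._slot = queue.Queue(maxsize=1)  # single-message buffer

    def put_handle(self, handle, timeout):
        # B -> S: raises queue.Full on timeout.
        self._slot.put(handle, timeout=timeout)

    def take_handle(self, timeout):
        # S's dedicated thread delivering to A; may block on A.
        return self._slot.get(timeout=timeout)

relay = HandleRelay()
relay.put_handle("handle-1", timeout=1.0)
try:
    relay.put_handle("handle-2", timeout=0.1)  # slot full: times out
    timed_out = False
except queue.Full:
    timed_out = True
got = relay.take_handle(timeout=1.0)
print(timed_out, got)  # True handle-1
```

Only the one server thread tied to A ever blocks on A; the rest of S
keeps serving other clients.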

> > * A number of threads that is linear in some potentially pretty large
> >   parameter, like number of open files, number of clients, etc. I
> >   don't know if this is a problem, it seems to be a basic assumption
> >   in the hurd design that threads are cheap.
> That is not what is currently done.  There is one thread per
> outstanding rpc.  Or are you suggesting something else?

I was thinking primarily about the servers; my impression was that the
current hurd server code simply creates one new thread for each client
that, for example, has an open file on the server. But I may be wrong.

> This would be horrible and I cannot think of any way around it.  The
> handle thread must do an open wait and as such the possibility exists
> that its thread id will be guessed by a rogue process.

All threads *must* be able to cope with occasional invalid messages.
The problem is how to prevent a thread from being flooded with invalid
messages, as in a denial of service attack. I think it should be
possible to limit the damage by counting the cpu resources used by a
task as the actual cpu usage + a constant times the number of rpc:s
initiated by the task. The constant should be a few times larger than
the time it takes for a receiving thread to check whether or not a
received message corresponds to a valid handle.
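The proposed charge is just a linear combination; a tiny sketch (the
constant's value here is an arbitrary assumption for illustration):

```python
RPC_CHECK_COST = 0.0005  # assumed constant: a few times the cost of
                         # validating one received message's handle

def charged_cpu(actual_cpu_seconds, rpcs_initiated):
    # Charge a task for its real cpu time plus a fixed fee per rpc it
    # initiated, so flooding a server with invalid messages shows up
    # in the flooder's own accounting, not just the victim's.
    return actual_cpu_seconds + RPC_CHECK_COST * rpcs_initiated

# A task that burned only 0.1 s of cpu but initiated 10000 rpcs is
# charged as if it had used far more.
print(charged_cpu(0.1, 10_000))
```

The key property is that the fee exceeds the victim's per-message
validation cost, so the attacker always pays more than the target.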

Then one can do various tricks in the scheduling. Besides the usual
adaptive priorities (as in unix), one could enforce that the cpu
resources used by any task (including the rpc time added above) must
be less than 100%. The effect would be that a single thread doing

  for (;;)
    ; /* plain busy loop */

will consume all left-over cpu, get the system load up to 100%, and
be scheduled with a low priority. But a thread doing

  for (;;)
    send_invalid_rpc (target); /* flood a victim with bogus messages */

will *not* be able to get all left-over cpu, leaving enough resources
for the target thread to discard the invalid messages without getting
starved.
Does that make sense?

One may also want to keep per-user cpu resource counts, so that a
dozen processes, owned by the same user, all doing their best to
waste their own and others' cpu time, will not be able to do more
damage than a single bad process.
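Per-user accounting could be layered on the same per-rpc charge
described earlier; a hedged sketch (the names and the constant are my
own assumptions, not anything from actual hurd code):

```python
from collections import defaultdict

RPC_CHARGE = 0.0005  # assumed per-rpc fee, as in the accounting above

per_user = defaultdict(float)   # uid -> charged cpu seconds
per_task = defaultdict(float)   # task id -> charged cpu seconds

def account(uid, task, cpu_seconds, rpcs_initiated):
    # Charge both the task and its owning user, so a dozen bad
    # processes under one uid draw on a single shared budget.
    charge = cpu_seconds + RPC_CHARGE * rpcs_initiated
    per_task[task] += charge
    per_user[uid] += charge

# Twelve flooding processes owned by uid 1000:
for task in range(12):
    account(1000, task, 0.01, 1000)

print(round(per_user[1000], 3))
```

The scheduler would then throttle on `per_user` as well as `per_task`,
so spreading the attack across processes buys the user nothing.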

