l4-hurd

Re: Getting Started with Hurd-L4


From: Espen Skoglund
Subject: Re: Getting Started with Hurd-L4
Date: Tue, 26 Oct 2004 13:14:40 +0200

[Marcus Brinkmann]
>>>> If you do an IPC, do you not donate the rest of your time slice
>>>> to the receiving process (assuming you don't block).  (Hence the
>>>> scheduler is not invoked.)
>> 
>> Just out of interest, do you know who came up with this idea?  It's
>> beautifully simple, and I'm sure I've seen it around before but
>> can't remember where!

> It's part of the L4 implementation.  Maybe look into research papers
> that discuss IPC models that build upon the "migrating thread" idea
> (synchronous IPC).

I believe this is discussed in some more detail in the SOSP '93
"Improving IPC by Kernel Design" paper.

In short, we don't want to have the overhead of doing processor time
accounting on each IPC operation.  We don't even want to modify the
blocked/ready queues since this implies touching more TLB and cache
entries (thread queues are doubly linked lists, and each
insert/removal thus requires touching 3 separate TCBs).  The solution
that we use is to enqueue/dequeue lazily.
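As a rough illustration, here is a minimal C sketch of the lazy
dequeue idea.  All names and the TCB layout are made up for the
example; real L4 kernels do this on hand-optimized paths:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical TCB layout; the field names are illustrative only. */
struct tcb {
    struct tcb *next, *prev;  /* ready-queue links (doubly linked list) */
    bool in_ready_q;          /* link state, which may be stale         */
    bool blocked;             /* the thread's real scheduling state     */
};

static struct tcb *ready_head;

/* Eager removal touches up to three TCBs (t, t->prev, t->next),
 * i.e. up to three separate cache lines / TLB entries. */
static void dequeue_eager(struct tcb *t)
{
    if (t->prev) t->prev->next = t->next; else ready_head = t->next;
    if (t->next) t->next->prev = t->prev;
    t->in_ready_q = false;
}

/* Lazy variant: a blocking IPC only flips a flag in the thread's own
 * TCB and leaves the ready queue untouched. */
static void block_lazy(struct tcb *t)
{
    t->blocked = true;
}

/* Stale entries are pruned when the scheduler actually walks the queue. */
static struct tcb *schedule(void)
{
    while (ready_head && ready_head->blocked) {
        struct tcb *stale = ready_head;
        ready_head = stale->next;
        if (ready_head) ready_head->prev = NULL;
        stale->in_ready_q = false;
        stale->next = stale->prev = NULL;
    }
    return ready_head;
}
```

The point is that the IPC fast path only writes the sender's own TCB;
the three-TCB list surgery is deferred until the scheduler has to look
at the queue anyway.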

You're right in that the spec does not say anything about time slice
donation on IPCs.  Most L4 implementations do perform timeslice
donation, though.  We are having some issues with the current
timeslice donation scheme, however, and are investigating possible
solutions.  The issues are mainly about ensuring that a thread can be
guaranteed not to run (not even on a borrowed timeslice), and whether
this really matters.
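For concreteness, a hedged sketch of what donation on the IPC send
path amounts to.  The structure and function names are invented; the
real fast path does none of this bookkeeping as explicit C:

```c
#include <assert.h>

/* Illustrative thread structure; not L4's actual layout. */
struct thread {
    int remaining_ticks;       /* ticks left in the current timeslice */
};

static struct thread *current; /* the thread the CPU is running */

/* Direct process switch on a blocking IPC send: control transfers to
 * the receiver without invoking the scheduler, and the receiver keeps
 * running on whatever is left of the sender's timeslice. */
static void ipc_send_and_block(struct thread *sender, struct thread *receiver)
{
    receiver->remaining_ticks = sender->remaining_ticks;  /* donation */
    sender->remaining_ticks = 0;                          /* sender blocks */
    current = receiver;
}
```

This is exactly why a thread can end up running "on a borrowed
timeslice" even when its own scheduling parameters say it should not
run at all.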

>>> That seems to be true.  However, like for ThreadSwitch, I'd expect
>>> this not to be done if the two threads reside on different
>>> processors.
>> 
>> Sounds good.  What happens in that case?  The L4 manual is a pretty
>> big, ominous-looking thing!

> The remainder of the time slice is lost, I think, and some other
> thread will be scheduled (the next thread in the run queue of the
> highest priority with any threads in it).

> I am not sure this is really explained in the spec.  It's more or
> less an implementation detail.  But it's the only thing that makes
> sense to do (the only thing I am not sure about is if the remainder
> of the time slice is lost, ie, if the timeslice is reinitialized,
> and if the remainder is added back to the total quantum of the
> thread).

The semantics of ThreadSwitch are explained in Section 3.4, and work
as you've described: a new scheduling decision is made, and the
timeslice of the thread is renewed.
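A toy model of those Section 3.4 semantics, with invented names and a
made-up timeslice length, just to pin down the two cases (explicit
destination vs. nil destination):

```c
#include <assert.h>
#include <stddef.h>

enum { TIMESLICE = 10 };   /* made-up timeslice length, in ticks */

struct thr {
    int ticks;             /* remaining timeslice */
    int prio;
};

/* ThreadSwitch renews the caller's timeslice (the unused remainder is
 * lost, not banked) and then either yields directly to an explicit
 * destination or lets the scheduler pick the highest-priority thread. */
static struct thr *thread_switch(struct thr *self, struct thr *dest,
                                 struct thr *runq[], int n)
{
    self->ticks = TIMESLICE;          /* remainder discarded, slice renewed */
    if (dest)
        return dest;                  /* explicit destination */
    struct thr *best = self;          /* nil destination: new decision */
    for (int i = 0; i < n; i++)
        if (runq[i]->prio > best->prio)
            best = runq[i];
    return best;
}
```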

>> The L4 model may want rethinking with the newer multi-core CPUs
>> that we're seeing these days though.  That combined with NUMA
>> architecture is pushing things towards building some sort of
>> hierarchy with the cost of switching CPUs dependant on how far up
>> the hierarchy you have to navigate to the other CPU (not sure what
>> the technical term for this is, it's not a straight distance thing
>> and (as far as I can remember) only works for tree structures).

> You have to do this yourself, in your own scheduler implementation.
> L4 only provides the mechanism to change the CPU a thread runs on,
> and not which thread should migrate to which CPU at which time.

Our group is pretty actively investigating MP systems (including NUMA
and multi-core CPUs) and how to build kernels and systems on top of
them.
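The hierarchy-cost idea mentioned above can be sketched in a few
lines.  The CPU numbering scheme is an assumption made purely for
illustration: units that share a level of the hierarchy share
low-order id bits (bit 0 selects the hyperthread, bit 1 the core,
bit 2 the package), so the migration cost is the number of levels you
must climb to a common ancestor, i.e. the position of the highest
differing bit:

```c
#include <assert.h>

/* Hypothetical numbering: CPUs sharing lower hierarchy levels share
 * low-order id bits.  Levels crossed = highest differing bit + 1. */
static int levels_crossed(unsigned a, unsigned b)
{
    int lvl = 0;
    for (unsigned diff = a ^ b; diff; diff >>= 1)
        lvl++;
    return lvl;                  /* 0 means "same CPU" */
}
```

A user-level scheduler could weight migration decisions by such a
distance, since L4 only provides the migration mechanism itself.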

> BTW, the L4 guys plan to use hyper-threading for faster IPC, I
> think, and not to emulate an SMP machine (or at least that's an
> option).  There is a talk about this by some Intel guy, but I didn't
> find any papers.

I know that Sebastian Schoenberg (and probably others) have proposed
something like this.  We have not evaluated the potential benefits (if
any) of such an approach though.
 
        eSk



