l4-hurd

Re: task server


From: Marcus Brinkmann
Subject: Re: task server
Date: Wed, 06 Oct 2004 22:22:30 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Mon, 9 Aug 2004 22:34:34 +0200,
Bas Wijnen <address@hidden> wrote:
>   1. It doesn't compile, because I get undefined references to {malloc, free,
>      calloc, etc} from the capability server.  I guess this should be fixed by
>      a libc-like lib which calls physmem.  This means physmem should serve
>      requests before it receives its task server capability, by the way.

This is true.  The question is what to use.  wortel, physmem, task,
and deva will all be "handicapped" to some extent: wortel needs to
implement everything itself, physmem initially has only wortel to
build on and later gets task and deva, task has physmem and wortel and
later deva, and deva has an almost fully functional environment.  But
for all of them it is true that they can't really run on dynamically
paged memory; that's just asking for trouble.  So we have to be
conservative.

I think the best might be to run on an optimized "root server
library".  Parts of that library are already in hurd-l4: libl4,
libc-parts, physmem/malloc*, wortel/startup* and libpthread are all
part of such a library.  However, I am kinda hesitant to go all the
way, as it basically means to write and maintain a small reduced
version of the C library.  But we may have to do it.

Physmem already _does_ serve requests before it receives its task
server cap.  In fact, the current dummy task server in CVS runs on
memory mapped in from physmem via the "dummy" container cap.  See
wortel/startup.c (physmem_map).  You can also see it in
physmem/physmem.c main(), where the manager thread is started before
calling get_task_cap().

You can also see some band-aids in the peculiar use of pthread to
avoid contacting the task server before it is there.  There is a
reason why wortel donates exactly three extra threads to physmem :)
(the manager thread, and two alternating worker threads).
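
To make the ordering concrete, the startup flow looks roughly like
this (a simplified sketch; only get_task_cap() is a real name from
physmem/physmem.c, the other functions are illustrative stand-ins):

/* Simplified sketch of physmem's startup ordering as described
   above.  Only get_task_cap() is a real name from physmem/physmem.c;
   the other functions here are hypothetical stand-ins.  */

#include <pthread.h>

extern void serve_next_request (void);	/* Hypothetical dispatcher.  */
extern void get_task_cap (void);	/* Real name; signature simplified.  */

/* The manager thread accepts incoming RPCs and hands each one to one
   of the two worker threads that wortel donated to physmem.  */
static void *
manager_thread (void *arg)
{
  (void) arg;
  for (;;)
    serve_next_request ();
}

int
main (void)
{
  pthread_t manager;

  /* Start serving requests first: the dummy task server runs on
     memory mapped in from physmem via the "dummy" container cap (see
     wortel/startup.c, physmem_map), so physmem must already be
     answering RPCs at this point.  */
  pthread_create (&manager, NULL, manager_thread, NULL);

  /* Only now fetch the task server capability; the manager keeps
     serving while we wait for it.  */
  get_task_cap ();

  /* ...  */
  return 0;
}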

>   3. Because I haven't signed any copyright transfer forms yet, the copyright
>      is still mine.  I license the included code under GPL version 2, as it
>      says.  Of course the FSF can license it however they like when the
>      transfer is done.  Can someone tell me how to get those papers?

You contact address@hidden, I think.  For continued development,
signing papers covering future changes would be a good idea.

>   4. It currently uses static tables of threads and tasks, limiting them to
>      arbitrary sizes.  This is not acceptable for a final version.

In fact, I am afraid to tell you that most of the work is in the data
structures for efficient operation. ;) But it's good in any case to
become familiar with how to write servers.
 
> The tables of threads and tasks are used for most operations.  In any
> case where a task info capability is present, the capability object
> will point to the task's memory, making it irrelevant how it is stored
> (in terms of speed).

You are "misunderestimating" (sorry for the pun) libhurd-cap-server.
It is already doing all the work for you in maintaining and managing
memory for capability objects.  This is the whole purpose of
libhurd-slab, which allocates memory in pages for libhurd-cap-server's
use.  Released objects are efficiently cached and reused; to avoid
superfluous initialization, constructors and destructors are used.
This is all documented in the header file of libhurd-cap-server, at
least to some reasonable extent.  You can see how it is used in
physmem/container.c, which shows how to create a cap class and receive
RPCs on it (together with physmem's main() of course).
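
The caching works like the classic slab-allocator trick: run the
expensive constructor only when the memory is fresh, and keep released
objects in their constructed state on a free list so that reuse skips
initialization entirely.  A minimal self-contained sketch of that idea
(just the concept, not the actual libhurd-slab interface):

/* Sketch of the slab caching idea described above: released objects
   keep their constructed state on the free list, so reuse skips the
   constructor.  This is not the libhurd-slab API, only the concept.  */

#include <stdlib.h>

struct obj
{
  struct obj *next;		/* Free-list link.  */
  /* ... constructed state ... */
};

static struct obj *free_list;

static void
constructor (struct obj *o)
{
  /* Expensive one-time initialization goes here.  */
}

static struct obj *
obj_alloc (void)
{
  struct obj *o = free_list;
  if (o)
    {
      /* Cached object: already constructed, just unlink it.  */
      free_list = o->next;
      return o;
    }
  o = malloc (sizeof *o);
  if (o)
    constructor (o);		/* Fresh memory: construct once.  */
  return o;
}

static void
obj_release (struct obj *o)
{
  /* Keep the object constructed; real destruction happens only when
     the cache is drained (not shown).  */
  o->next = free_list;
  free_list = o;
}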

How to organize the _relationship_ between task caps, task info caps
and thread allocation is an open question even to me.  It's something
I want to ponder next, and it is somewhat crucial.  However, for the
actual cap objects, you don't need to do much.  It's all automagic.
 
> The only operation where the task info capability is not present is
> "get the task info capability."  This operation is used a lot, so it
> should be fast.

Well, maybe.  A general side note: no single RPC should be used a lot.
If you use an RPC a lot, you are doing something fundamentally wrong
(I learned this when writing the Hurd console, where I used RPCs to
send the console data - it was quite slow).  Of course, certain
operations must still be fast, but usually for different reasons, for
example to avoid latency with concurrent operations.

As a rule: If you make a server RPC, you are already on the slow side.
It's not a reason to make any individual server RPC slow, or even the
whole RPC mechanism (quite the opposite: as RPCs as a whole are called
often, they must be as fast as possible!).  But you have to look
carefully at which operations are really needed a lot and which
aren't, and you should look for ways to reduce the number of RPCs if
they are a problem.

When is "get the task info cap" called?  Whenever a new connection is
established between a server and a client.  However, this happens
once, and then for all further caps and RPCs between those two
partners you don't need to pay attention to the task info cap at all.
So, let's say you look up a file over a mount point.  Then for the
lookup, you negotiate the cap, and thus request a task info cap.  But
to actually open the file, you already have the task info cap, so you
don't need to do anything.  These caps can be cached for a while to
avoid requesting a new object a zillion times if you happen to have a
lot of individual lookups over that mount point (and no other file
handle to the filesystem open).
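
Just to illustrate the caching, here is a hypothetical client-side
table (none of these names exist in hurd-l4; request_task_info_cap
stands in for the real RPC):

/* Hypothetical client-side cache of task info caps, keyed by task
   ID.  None of these names exist in hurd-l4; this only illustrates
   the caching described above.  */

#include <stdlib.h>
#include <time.h>

typedef unsigned int task_id_t;
typedef void *task_info_cap_t;	/* Stand-in for the real cap type.  */

struct cache_entry
{
  struct cache_entry *next;
  task_id_t task_id;
  task_info_cap_t cap;
  time_t last_used;
};

static struct cache_entry *cache;

/* Hypothetical RPC requesting a task info cap from the task server.  */
extern task_info_cap_t request_task_info_cap (task_id_t task_id);

task_info_cap_t
task_info_cap_lookup (task_id_t task_id)
{
  struct cache_entry *e;

  for (e = cache; e; e = e->next)
    if (e->task_id == task_id)
      {
	/* Hit: no RPC needed, just refresh the timestamp.  */
	e->last_used = time (NULL);
	return e->cap;
      }

  /* Miss: pay the RPC once and cache the result.  Entries unused for
     too long would be expired elsewhere (not shown).  */
  e = malloc (sizeof *e);
  if (! e)
    return NULL;		/* Error handling omitted for brevity.  */
  e->task_id = task_id;
  e->cap = request_task_info_cap (task_id);
  e->last_used = time (NULL);
  e->next = cache;
  cache = e;
  return e->cap;
}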

In fact, thinking about it, maybe the whole concept of task info
capabilities is wrong.  Maybe it should just be a single object, and
you can use RPCs to mark and unmark certain task IDs you are
interested in watching for task death.  This would be a much more
efficient implementation.  In fact, this makes a lot of sense to me.
"Requesting task info cap" was for me always a place-filler for the
notion of somehow starting to declare your intent to watch out for a
certain task ID, and avoid that it is reused until you undeclare your
intent (and get somehow informed about task deaths).
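
A hypothetical sketch of what that alternative interface could look
like (nothing like this exists in the code; the names are made up):

/* Purely hypothetical sketch of the alternative described above: one
   watch object per client instead of one task info cap per watched
   task.  */

typedef unsigned int task_id_t;
typedef int error_t;

/* Mark TASK_ID as watched: the task server must not reuse the ID
   until it is unmarked, and must notify us when the task dies.  */
error_t task_watch_mark (task_id_t task_id);

/* Withdraw the interest declared by task_watch_mark.  */
error_t task_watch_unmark (task_id_t task_id);

/* Death notification delivered by the task server for a marked ID.  */
void task_death_notify (task_id_t task_id);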

Needs more thinking. :)

There is more to say, later.

Thanks,
Marcus
