Re: vk_l4 -- CVS Setup

From: Farid Hajji
Subject: Re: vk_l4 -- CVS Setup
Date: Tue, 30 Oct 2001 03:50:47 +0100 (CET)

> > sname proved to be a potential bottleneck if used heavily. For this reason,
> > the Hurd decided to distribute the nameserver across the filesystem.
> Are you sure that performance was important for that decision? I
> thought it was more like "We already have one mechanism for
> associating objects with names: The file system. Any extra name server
> is redundant, unneeded complexity, so let's just get rid of it", but I
> don't really know.
I must have confused something. I remember reading a paper that compared
the use of a single nameserver a la sname to the distributed nameserver
scheme used in the Hurd [no, don't ask me about this paper: I've spent
the last couple of days hunting through my printed docs, but I haven't
found it yet]. I don't know whether the Hurd developers anticipated the
benchmarks presented there, or if you're right that the filesystem was
reused as the rendezvous point simply because it was already there.

> > Basically, glibc is bootstrapped to access '/' (the translator for
> > the root node) directly, then the names are mapped to the filesystem,
> > e.g. like /servers/somename.
> It's not really glibc that is bootstrapped. The normal way of starting
> hurd processes makes sure that it has a few open ports at the start.
> The list includes the process' stdio fd:s, its current working
> directory, and the filesystem root (which may or may not be the same
> root as all other processes use).
I actually meant '/', the filesystem root port. This one is needed
in glibc:
  ${GLIBC_SRC}/hurd/hurdlookup.c: __file_name_lookup()
  ${GLIBC_SRC}/hurd/hurdlookup.c: __hurd_file_name_lookup()
So you're right ;-).

> > 1. You must at least bootstrap the Hurd _and_ the root filesystem server,
> >    before you can use nameservice under the Hurd.
> What boot does is bootstrapping the environment in which the initial
> processes are started. I.e. it bootstraps what you call the "hurd name
> service".
${HURD_SRC}/boot/boot.c: main() uses Mach's get_privileged_ports()
to obtain the privileged host port and master device port.
  [[ Note that this service is already provided by Mach! ]]

The privileged host port is then used to obtain a port to the
default pager. Under L4, we'll have to bootstrap the TID of the
default [UVM?] pager into boot. [okay: no glibc bootstrapping here].

The master device port is needed to open any device, including the
pseudo-root [${HURD_SRC}/boot/boot.c: ds_device_open() last line].
Under L4 we'll need to bootstrap the TID of the device task server.

> >    vk-l4 pager _and_ vk-l4 superdriver TIDs first. This would be
> >    only possible through a vk-l4 nameserver [or the less than
> >    optimal hard-coding of TIDs] at this stage.
> If it's just these two (or less than, say, five), hardcoding things
> seems reasonable to me. A name server that 1. never handles more than
> five names, 2. handles the same names every time, and 3. is used only
> for bootstrapping, seems overkill. But perhaps you imagine other uses
> for the name server.
The reason I'm suggesting a nameserver instead of hard-coding the TIDs
is simply to ease development (!). Suppose that we've got 2 or more
pagers running concurrently on top of L4. Hurd's boot could e.g. always
request the pager with the highest revision, unless directed otherwise
on the command line. Besides versioning, I could imagine running two
Hurds in parallel, each using a different pager (with e.g. different
paging algorithms). All of this would also be possible by hardcoding
different values in boot.c, but it seems easier to me to use a
low-level nameserver here.

Of course, it's not that important. Let's use hard-coded values as a
start. A vk-l4 nameserver could always be added later.

> more than one initial process. I'm not sure what L4 supports, but
> given its minimalism it seems reasonable to have a user-level
> serverboot-like program do the bootstrapping.
L4 spawns a root task as the first user-level task. This could be
some kind of "meta-serverboot" that starts up all other tasks,
including, but not limited to
  * the tasks that make up the vk-l4 environment (pager, mdevicetask, ...)
  * the OS personalities' "serverboot"s/bootloaders.
Of course, "meta-serverboot" (let's call it that for the moment)
will need to obtain initial configuration data. Needless to say,
all tasks that are prerequisites to accessing the root file system
will have to be loaded from memory (via GRUB). This includes the
configuration of the "meta-serverboot" task.

> > It would not be wise to mix layers here by using upcalls from L4 to
> > the Hurd (or something else), just to name one example.
> L4 can use servers that are Hurd processes, if those processes use RPC
> interfaces defined by L4. Examples include pagers and interrupt
> handlers for user-level drivers.
Concerning pagers: you're technically right. Every L4 thread is created
by specifying the user-level pager thread that L4 should use. This
pager thread could belong to the Hurd. Of course, the Hurd pager tasks
will then have to be their own pager [;-)] or delegate this to a
general-purpose pager that is independent of the Hurd. Many
possibilities present themselves here.

If the user-level Hurd pager task (for L4) uses stores or the like as
backing storage, we could obtain driver-based swap space easily.
But I'm not sure it would work. Here again, there is a chicken-and-egg
problem lurking. I'm also not entirely at ease with mixing layers
like this [okay, I'm a bit conservative here: nothing prevents you
from hacking].

The interrupt handlers could also be served by (or hooked up to) Hurd
tasks that implement the L4 interrupt protocol. The same objections
apply here as well.

The real question concerning interrupt handlers is this: Which entity
will maintain the user-level device driver tasks? If it is OSKit, then
this would belong outside of the Hurd itself. The Hurd would use glue
code to access those user-level driver tasks [be they running on top
of Mach or L4]. If it is one of the L4 teams, the situation would
be exactly the same as for OSKit. If the Hurd team takes them over,
it should be done in a Hurd-independent way, so that other projects
can use them anyway. Hmmm...

Currently, it looks like the device drivers would be maintained by
OSKit and glued into L4 tasks by L4 developers and into Mach tasks
by Hurd people. Hurd/L4 would use (and probably help maintain)
the L4 driver tasks.

Of course, this is all speculation at the moment ;)

> Taking it a little further, if you want to get network transparent rpc
> into L4 (not that I know if that is realistic), you won't put
> networking inside L4, but define an rpc interface that takes, say, a
> message in some form and an hostname/servicename/URL (possibly in a
> numeric interned form). That rpc would be implemented by some hurd
> process communicating with the hurd pfinet. So you could get network
> transparent rpc and still use Hurd servers for doing the networking,
> dns lookups etc. L4 need only know if the target of an rpc is local or
> remote, and it wouldn't make much sense to put any more knowledge
> about network things into L4 than just the local/remote distinction.
> (if you think some more about it, L4 may not even need that).
I agree with you here. Network-transparent RPC will need:
  1. network-wide unique TIDs
  2. a mechanism in L4 to redirect remote IPC to a local
     TID (somewhat like the clans/chiefs mechanism). This local
     TID could belong to a user-level network task which would
     marshal and forward the IPC to another user-level network
     task somewhere else on the network.

BTW, I wouldn't use pfinet here, for several reasons:
  a. pfinet uses a very poor TCP/IP stack. Even if it were to be
     replaced with, say, 4.4BSD's IP stack, there would still be
     a lot of overhead.
  b. pfinet sits way too high in the system layering. It will need
     to use lower-level IPC services that _must_ remain local. This
     could be hairy to ensure consistently [no recursive calls to
     pfinet should be allowed].
  c. TCP/IP may be overkill on LANs, especially in tightly coupled
     clusters. For efficient network-wide message switching, you
     may need to avoid IP completely and use something like
     Fast-IP, or even use the bare hardware as a special optimization
     in some cases. [Most clusters won't be simple workstations
     networked across routers, but will mostly be attached to
     ATM or Gigabit Ethernet switches, or even embedded directly in
     a cabinet with a very high-bandwidth switching backplane.]

> /Niels


Farid Hajji -- Unix Systems and Network Admin | Phone: +49-2131-67-555
Broicherdorfstr. 83, D-41564 Kaarst, Germany  | address@hidden
- - - - - - - - - - - - - - - - - - - - - - - + - - - - - - - - - - - -
One OS To Rule Them All And In The Darkness Bind Them... --Bill Gates.
