

From: olafBuddenhagen
Subject: Drivers
Date: Sun, 1 May 2005 16:11:08 +0200
User-agent: Mutt/1.5.9i


As you all know, our "official" proposal for hardware drivers is based
on a generic (portable) device driver framework for L4 based systems
(fabrica), and a low-level Hurd server (deva) to create the
Hurd-specific environment and interface for the framework.

However, I believe we gain nothing by working with a (somewhat) portable
framework. (If we really want portable drivers, we should probably go
the whole way and aim at UDI...) Portability only makes things more
complicated, raising the entry barrier and development costs, without
any relevant benefit for us. IMHO, we should go for a native hurdish
framework instead, with full integration and all the benefits from using
Hurd-specific facilities. I'm sure this gives us many advantages: For
those creating the framework itself; for the driver developers; as well
as for the users/admins/system vendors using the framework in the end.

Thus I'm putting forward a different proposal, which is partially based
on the original one (many thanks to the authors for the valuable work,
without which I'd be totally lost), but introduces a completely
different method for integrating the drivers in the system.

I've already got some initial feedback (on IRC); so I hope the greatest
shortcomings are fixed now, and it's ready for exposure to stronger
scrutiny here. However, as I have never worked on drivers or on Hurd/L4
before -- I am working on this proposal now because I believe there are
great advantages from doing it the way I'm suggesting, especially from a
user's point of view -- there are still many issues open, and I'm sure
there are also still errors and omissions in other parts as well. So if
you have any comments, suggestions, corrections, additions, whatever: Go
ahead! Any feedback is highly appreciated. (Except for negative
opinions, of course ;-) )

Now here we go:

  POSIX Level Drivers


In "traditional" monolithic kernels (Linux, BSD), hardware drivers (like many
other things) reside in the kernel. This is simple, but there are a couple of
drawbacks: Most notably, a single malfunctioning driver running with kernel
privileges can do any amount of harm to the whole system. (Data loss, crashes,
security problems.)

Thus, it is desirable to have the drivers in userspace instead, with strict
protection domains (separate address spaces etc.) -- a driver shouldn't have
access to anything it doesn't need for its operation. While the
first-generation microkernel Mach still kept the drivers in kernelspace, the
second-generation L4 allows (actually, forces) us to have the drivers in
userspace.

However, userspace is a very broad term in the case of a microkernel based
system: It ranges from the most basic system services up to application
programs. Technically (from the processor's perspective), this is all the same:
Either code runs in full privileged mode (kernelspace), or in limited privilege
mode (userspace). No other distinction. However, to users and programmers,
there is a considerable difference between the level of the basic system
services, and that of application programs.

In Hurd/L4, there are some core servers directly above the kernel, providing
what is absolutely necessary for the system to run; and some more, providing
(with the help of the C library) a more abstract and convenient, POSIX-like
environment for applications to run in.

,------.   ,------.  ,------.
|      |   |      |  |      |
`------'   `------'  `------'
,------.        ,------. application programs,
|      |        |      | filesystems etc.
`------'        `------'
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ POSIX level
     ,----. ,----. ,----.
     |    | |    | |    |
     `----' `----' `----' system servers
 ,----. ,----. ,----. ,----.
 |    | |    | |    | |    |
 `----' `----' `----' `----'      userspace
        ,----------.            kernelspace
        | µ-kernel |
        `----------'

Programs running at that level can directly use all the features available in a
traditional UNIX environment: Users and groups, access permissions, resource
management, program execution, processes, signals, threading, the filesystem,
streams, controlling terminals, access to standard utilities and the shell; and
of course the C library, to make use of all of these features from within a
program.

The POSIX standard(s) define all of these features. Thus, we often talk of the
"POSIX environment" for convenience. It doesn't really refer to the standard;
it just describes the set of features available in a typical UNIX environment.
Throughout this document, the term "POSIX level" is used as a synonym for the
UNIX-like (POSIX compatible) environment provided by the Hurd. (This includes
the Hurd-specific extensions, like translators, generic RPCs, or direct access
to native facilities such as the auth server, allowing the implementation of
alternative security schemes.)

It is roughly the level of functionality offered by a monolithic kernel (plus
C library) -- the POSIX level can be considered what userspace is in a
traditional UNIX system.

Note that even on such a monolithic system, various higher-level system
services run in userspace, using POSIX facilities. (Think of all the essential
daemons always running on a UNIX system.) On the Hurd, all file systems also
run at POSIX level, as translators. (Monolithic systems traditionally have
those in the kernel, but some of them have recently added an option of running
file systems in userspace too, e.g. via FUSE in Linux.)

So the Hurd system servers running *below* the POSIX level, providing the
facilities for those programs running above it, are roughly speaking what was
moved out of the kernel in the transition from a monolithic to a
microkernel-based system.

Now when moving drivers to userspace, where should we put them? An obvious
place would be somewhere among the system servers, like the other stuff moved
out of the kernel. After all, they are very low-level and essential to the
system.

Well, are they really? How about putting even low-level hardware drivers at the
POSIX level, among filesystems and applications, running them as (almost)
ordinary translators?


This proposal might sound crazy at first. But I'm not only pretty certain it
can be done, but also very very certain this is a thing we really want to have.

Still, this is the hardest part to explain... It's easy to have visions; to see
wonderful things in one's mind. It's not easy to write them down such that
others can get at least a glimpse of the vision... Anyways, here's me trying:

For one, with drivers at POSIX level, there is no need for a sophisticated,
full-featured special driver framework: Library functions, loading and
unloading drivers, communication amongst drivers and to the outside world,
configuration management/parameter passing, resource management -- it's all
already there, using the ordinary POSIX/Hurd interfaces. All we need are a few
extensions to the standard POSIX/Hurd mechanisms, to allow for driver-specific
stuff like IRQ handling and I/O port access.

Having no extra framework for drivers also means driver development becomes
much easier: No need to cope with some limited environment. No need to learn
special APIs for a driver-specific library; driver registering/startup and
shutdown; memory management; threading and locking; configuration management;
communication to other drivers and the outside world; permission handling.
Drivers are written just like any other translator, using all your normal
programming experience. All one has to know are a few fairly simple extensions
to the ordinary POSIX/Hurd interfaces. The only specific APIs a driver handles
are access to lower-level drivers via their filesystem interfaces, and
exporting its own filesystem to the world.

To the user, having no separate driver domain means there is no longer a need
for magic incantations to manage drivers that are accessible only indirectly
through special interfaces -- as drivers are now ordinary programs, all the
standard system tools can be used instead. Starting a driver is only a matter
of setting a translator, for example. Parameters can be changed at runtime
using fsysopts.
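As a sketch of what that could look like in practice (the driver name, node
paths, and options below are invented for illustration; `settrans` and
`fsysopts` are the standard Hurd tools):

```shell
# Start a hypothetical network card driver as an active translator on a
# device node, telling it where its parent bus node lives:
settrans -a /dev/eth0 /hurd/rtl8139 --bus=/servers/bus/pci/0/3/0

# Inspect and change the driver's parameters at runtime:
fsysopts /dev/eth0
fsysopts /dev/eth0 --promiscuous

# Stop the driver again by making the translator go away:
settrans -g /dev/eth0
```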

Furthermore, drivers being ordinary programs removes the need for any
(imperfect) magic making drivers more accessible by giving them some semantics
of ordinary processes -- they just *are* ordinary processes, with all the nice
things that come with that. Perfect transparency.

Transparency is also achieved by the fact that, in a hurdish manner, all
connection/communication between the drivers happens through the filesystem.

There is one more advantage of drivers being ordinary programs, contributing to
the ease of driver development: All the ordinary debugging tools, like GDB or
rpctrace, can be used in the usual manner. The fact that we use standard
filesystem operations for all the interfaces of a driver, also considerably
helps debugging.

Another problem solved by drivers residing in the normal application space: In
the "traditional" monolithic approach, there is always the dilemma whether some
functionality should be included in the kernel together with the low-level
drivers, or pushed to userspace. There is often no obvious separation line in
the functionality; but some division needs to be made due to the strict
technical border between kernel and userspace.

Putting the drivers in a special driver realm in userspace doesn't lift that
dilemma: There is still a strict separation line between the drivers and the
actual application realm. The solution is making some provisions that allow
putting everything, even the lowest-level drivers, into the application realm
-- drivers at POSIX level.

With hardware drivers, higher-level driver layers, and the actual applications
all in a single uniform framework, we get perfect consistency. Hardware
autodetection and configuration for example -- from the low-level drivers up to
application program modules -- is possible without any anomalies caused by
having to work at several different layers to manage a single piece of
hardware. If some application needs access at a lower level than usual, there
is no need for creating special interfaces circumventing the higher level
driver parts. Just plug in at the desired level in the hierarchy -- since the
driver components are not special in any way, any program can take over their
function.

It gives us unprecedented simplicity and flexibility in combining the various
components in the system; in managing configuration, ranging from fully manual
setup, through simple config files and custom shell scripts, to sophisticated
fully-automated configuration managers, or any combination thereof, uniformly
through the whole driver/application stack.

Hot-plugging isn't special anymore: From a fully dynamic system, with drivers
always being launched explicitly (automatically on system startup and hotplug
events, or manually by the user), to a totally static system, where drivers
are set up only once and remain there (using passive translators), everything
is possible. You can handle non-removable devices dynamically, to automatically
adapt to changed system configuration, or you can set up hot-pluggable devices
statically, like a mouse that is always connected to the same USB port for
example. You can even combine them in kind of an inverse manner, e.g. handle
connecting to a docking station by dynamically inserting a tree that has
drivers for all the devices in the docking station set up statically.

Another thing that becomes trivial is starting drivers on demand: Just set a
passive translator, and the system will take care of the rest.
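A sketch of on-demand startup via a passive translator (node and driver names
invented; `settrans -p` is the real mechanism):

```shell
# Record the driver as a passive translator: nothing is started yet, the
# setting is merely stored on the node.
settrans -p /dev/cd0 /hurd/ide-cd --bus=/servers/bus/ide/1/0

# The first access to the node makes the system launch the driver:
ls -l /dev/cd0
```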

The drivers themselves can also be designed very flexibly: Maybe using a
simplistic approach with a single process handling everything. Or separate the
critical lowest-level parts (register access) from the more complex but
uncritical higher-level parts into two separate components (like KGI/GGI does).
Or even use several processes at various levels to handle all the bits: All can
be done easily and without negative consequences. No problems like having to
match components at both sides of some driver realm vs. user realm border.

There are some more advantages from drivers being ordinary programs: Not only
who is allowed to access which device is decided by standard UNIX file
permissions, but also who is allowed to install drivers for a particular device
can be managed that way -- just change the permissions on the underlying node.
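For example (the device node path is invented, and the exact permission check
for installing a translator is up to the server exporting the node):

```shell
# Restrict a hypothetical sound device node to root and the audio group:
# only they may access the device -- and, since setting a translator
# requires the right to modify the node, only they may install a driver
# on it.
chown root:audio /servers/bus/pci/0/5/0
chmod 664 /servers/bus/pci/0/5/0
```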

Another very useful feature is that process accounting applies like to every
other process on the system. How much memory and CPU time a driver can get
compared to other drivers and normal programs, depends solely on the priority
it is given. In a more sophisticated resource management system, drivers can
get resources on account of the programs that access them -- thus the sound
card driver for example can get a high priority if the sound recording
application has a high priority and generously donates resources to the driver,
making sure the recording won't be disrupted by lower priority processes.
(Quality of Service)

Summing up, we get a *much* simpler framework; *lots* more convenience,
flexibility, transparency etc. for the users/admins/system builders; and *lots*
more convenience and a considerably lower entry barrier for driver developers.

The only disadvantages I can see are more dependencies, and a slight overhead
here and there. (Due to using filesystem semantics instead of free-form RPCs,
for example.)


While I claimed earlier that I'm pretty sure what I'm proposing here is
possible, this doesn't mean I've worked it out in every detail yet :-) I tried
to think through all issues as well as I can; but having no experience with
driver development (I only got dragged into it by my KGI work, and by the idea
of POSIX level drivers appealing to me mostly from a user's point of view), I'm
often at a loss here. Thus any comments, suggestions, corrections on this
section will be especially much appreciated.

Also note that this proposal builds on the original proposal for a driver
framework using deva/fabrica by Peter de Schrijver and Daniel Wagner. While I'm
proposing some radical departures (drivers running at POSIX level making full
use of standard Hurd mechanisms, instead of running in a special driver domain
managed by deva and using many private facilities/interfaces), there are also
many things that can be taken directly from the original proposal. Wherever
something isn't explicitly mentioned here, it can be considered to refer to
the original deva/fabrica proposal.


As mentioned at the outset, the fundamental idea is to run hardware drivers as
more or less ordinary translators. They can use all the POSIX (UNIX-like)
mechanisms any other program can use. Like every program running on the Hurd,
of course they can also use the Hurd extensions to the POSIX facilities, or
access the more generic underlying Hurd interfaces directly.

Various drivers (translators) at different levels are combined to form a driver
hierarchy: Root bus driver, intermediate bus drivers (possibly nested, e.g. for
a USB controller connected to the PCI bus), leaf device drivers, and possibly
higher-level drivers (e.g. a sound driver accessing the specific sound card
driver).

The drivers at different levels communicate exclusively by file I/O (i.e. RPCs)
on filesystem nodes exported by the lower-level drivers.

The translators are only more or less ordinary, because there are obviously
things that are special about hardware drivers, requiring some additional
facilities (usually extensions of the standard mechanisms) not necessary for
other programs/translators. However, these facilities will be generic: A driver
can use some of them as needed, while other translators (non-drivers) -- and
probably even some higher-level hardware drivers -- won't use them at all.
Technically, there is no strict distinction between drivers and other
translators.

The drivers also do not get any special permissions from the system that other
programs do not have; all they need for their work is exported by the
lower-level drivers, using the standard filesystem mechanisms.

The following sections will discuss various topics relevant for hardware
drivers, and how they could be handled. (Sometimes requiring additional
facilities, often making use of the existing standard mechanisms.)


One interesting problem is handling dependencies between the drivers: What
happens if a driver tries to access some functionality that is not available
due to some other driver missing?

Note that this problem is not really specific to POSIX level drivers; it's only
somewhat more tricky, because the use of libc can make such dependencies less
obvious. Nonetheless, after considering it for a while, it doesn't seem to be
such a big problem after all: If we call a libc function that relies on some
specific device, it will just generate an error, like it does in many other
situations. The calling driver has to decide whether a failure in this call is
non-critical and can be ignored, or it's better to bail out. Nothing special
about it. The user (or driver manager) has to fix the order in which drivers
are loaded, if such a problem occurs. I don't think there is anything the system
can or needs to do about it.

A special case of dependencies is drivers referencing themselves. A console
driver for example obviously shouldn't try to print an error message on the
screen. Note that such self-reference loops could go over several drivers, so
they are not always obvious. (The error could happen in some lower-level driver
the console driver depends on, for example.) Still, this can be easily fixed:
The driver just needs to set some kind of lock preventing it from reentering
itself. Again, an issue to be handled by the individual drivers.


Another interesting point is loading the initial set of drivers, until we have
enough to fetch further drivers from disk. This too is an issue that is not
really specific to this proposal. It's only a greater problem here, because
there are more dependencies to fulfill before the framework becomes functional.

The usual way to handle this (on *every* system using dynamically loaded
drivers), is employing a ramdisk. It would need to contain at least the core
system servers, the necessary drivers, libc, ld, and a section of /dev (with
the necessary drivers as passive translators).
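A minimal ramdisk along these lines might look roughly as follows (all names
are purely illustrative, not a worked-out layout):

```
/lib/ld.so.1        dynamic linker
/lib/libc.so        C library
/hurd/root-bus      core drivers: root bus, PCI, IDE, ...
/hurd/pci
/hurd/ide
/dev/bus            root bus node (passive translator record)
/dev/hd0            disk node (passive translator record)
```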

If we want to employ autodetection, or some other dynamic setup scheme instead
of passive translators at this stage already, more drivers are necessary, as
well as some program doing the detection and loading the appropriate drivers,
and possibly additional tools necessary to run the program (e.g. a shell).

Setting up the drivers is done by a special task that is started by wortel,
after all the other core system servers and the ramdisk are up. It does
whatever is needed to set up all drivers necessary for the rest of the bootup
process, until the real root partition can be accessed. (Root bus driver, some
more bus drivers and other core drivers, and typically a disk driver.)

Once the necessary drivers are loaded, the boot process can continue. (How this
happens is outside of the scope of the driver framework.) At some point after
the root partition is mounted, the initial driver tree is moved off the ramdisk
to the real root partition somehow. Additional drivers can then be loaded by
whatever method is used, through boot scripts and after the system is up. (This
is also out of scope; some suggestions can be found in the "Why?" section.)

An alternative approach could be using special minimal drivers for the bootup,
until we can replace them by the proper ones. The idea being that the boot
drivers can be considerably simpler, as they need to support only very
rudimentary functionality: only read operations, no multiuser capabilities
(resource management/accounting, access permissions), no speed optimization;
and most importantly, they can use some much simpler framework. They could
even be statically linked into a single program.

The downside is of course that there would be a (slight) duplication of effort
between boot drivers and real drivers. It could also lead to an unnerving
situation where some device is supported by the boot drivers but not the real
ones, or the other way round. So this is a somewhat suboptimal solution.

Well -- unless we hack GRUB to allow us to use its drivers. As GRUB needs
exactly the same drivers anyways (or we couldn't boot the machine at all), why
not simply reuse them for our bootstrapping purposes?

There is one more issue with bootstrapping: A set of capabilities for handling
system resources (IRQs, memory regions for I/O and standard I/O ports) needs to
be distributed to the drivers managing those resources. The capabilities should
be passed to the driver setup task from wortel, and need to be forwarded to the
appropriate drivers, probably using a special RPC.

This RPC could either be done immediately (after loading the drivers, or
loading them automatically if they are set up as passive translators), or upon
request from the drivers. In the second case, we explicitly need to make sure
we are talking to the translator attached to the node where the driver is
expected to be. (Otherwise, someone else might sneak in and ask for the
critical capability!)


The fundamental distinction between hardware drivers and normal programs is of
course that they have to access the actual hardware devices. This means access
to regions in the I/O address space, as well as special regions in the normal
address space (for memory mapped I/O).

It is important to understand that drivers do *not* get any special privileges
for hardware access that are not available to other programs. All hardware
access is managed by standard Hurd mechanisms.

The *only* driver that actually gets special access privileges is the root bus
driver, which is the primary manager for all relevant memory and I/O regions.
The child bus drivers can access the subregions relevant to them through file
I/O operations on nodes of the filesystem exported by the root bus driver, and
in turn export subregions to their children. (Further bus drivers or leaf
device drivers.)

Whenever possible, a driver should attempt to export only those regions that
are really used by the device represented by the respective node. This is not
always possible: In some cases the parent driver has no way to know what
regions the device occupies without knowing the specific device. (E.g. for
non-PnP ISA devices.) In such a case the parent will have to offer a special
interface allowing the child driver to register for regions, which will then be
exported. It just has to trust the child driver not to request wrong regions.

For faster access to memory, the child can also request the region to be mapped
to it directly (instead of having to use file I/O operations for each and every
access), by means of the standard mmap RPC. If the parent doesn't have direct
access to that region yet itself, it will request a map from its own parent,
all the way up to the root bus server.

Direct access to memory regions can usually be safely granted, unless there is
some different memory region within the same memory page. (Shouldn't ever
happen, considering that the memory address space is big enough to allow for a
wasted page fragment for padding.)

I'm not sure how mapping I/O regions should be handled. My hope is that all
performance-relevant buffers will always allow memory mapped I/O, so the default
RPC method is fast enough for the remaining I/O registers, and we need not
bother about mapping at all.

Note that I/O registers are much more critical anyways, as the I/O space on
x86 was originally severely constrained; as a result, it is standard for many
devices to share a single 4k page in I/O space. I'm not sure this can be
safely handled in a simple manner.

On a different note, even with access restricted to memory and I/O regions that
are really used by the device in question, we are not always on the safe side:
DMA for example can cause the device to read/write wrong memory regions, if
a bogus address is stored in the DMA setup registers. Just like with
regions requested by the child, we can only trust the child driver not to do
something harmful.

Of course, only a privileged user will be allowed to load such trusted drivers.
(This has to be ensured by appropriate file permissions on the relevant nodes.)

Also, those cases are candidates for modular drivers, using an extra process
for the dangerous low-level stuff, or even better, several processes for
individual parts of the low-level functionality. (E.g. a micro-driver only for
the DMA setup.)


There are basically two kinds of DMA: The old ISA DMA uses a central DMA
controller in the chipset, with a number of DMA channels statically assigned to
individual devices. This one is easy to handle: Simply use a driver for the DMA
controller, which handles requests from the drivers of devices using DMA. To
make sure a client actually has the permission to read/write the requested
memory region, it is required to map that region to the DMA driver.

The other kind of DMA is more problematic: Modern systems (PCI) use a builtin
DMA facility in the individual device, allowing the device to access RAM
completely on its own. This means however that only the specific device driver
knows how to set up DMA -- there is no way for the parent driver to prevent the
child from doing something harmful, by writing wrong values into the device's
address registers.

While this effectively precludes truly optimal robustness, robustness can be
improved by a few orders of magnitude using modular drivers, as explained
above: Use an extra process (sub-driver) that is responsible exclusively for
setting up DMA for the specific device. (Of course checking for a valid memory
mapping from the requesting process beforehand.)


I'm not sure how IRQs should be handled. If possible, I'd go for a solution
using a central IRQ driver receiving all interrupts (L4 allows setting a
receiver thread for each interrupt slot, which in turn gets the interrupts
through special IPCs), and passing them on to the individual drivers through
IRQ ports exported as a filesystem. Connecting to those ports would be managed
by the bus drivers.

There are two problems with this approach: For one, the interrupt driver doing
an RPC to the actual driver on each interrupt introduces an overhead, which can
be considerable in some extreme cases. (Fast serial ports and gigabit Ethernet
can generate up to several hundred thousand interrupts per second.)

Also, I'm not sure whether a central IRQ driver could handle PCI shared IRQs.
(Did I mention already that these should be outlawed? ;-) )


Connecting the individual drivers to form a driver hierarchy is simple, as it
all happens through the filesystem, setting up translators referencing each
other as necessary.

There are two approaches to the structure of the driver setup however. The
simpler one is to put all translators directly in /dev, and let them only
indirectly reference each other. (The locations of the parent drivers need to
be passed on the command line.)

A more elegant and intuitive approach is to organize the translators themselves
as a hierarchy: The root bus server exports a couple of nodes, on which some
core drivers are set. One of them is the PCI bus driver, which in turn exports
one node for each device attached. On each of those device nodes, a driver for
the respective device is set. Some of these are leaf devices, while others are
further bus drivers, like IDE or USB. And so forth.

One problem with this approach is that, if we want to set up the drivers
statically using passive translators, we need some method to permanently store
a hierarchy of passive translators, e.g. using some special translatorfs. (Note
that this is an important thing also in other situations, so most likely we
will get something to handle this sooner or later anyways.)

In reality, we do not have a perfect tree structure anyways. Some driver might
depend on several lower-level drivers, for example on the bus driver and the
DMA driver. In this case we need to decide which is the major parent on which
to set the driver, and which one will only be supplied through a command line
option. This already suggests that in practice, we will have some combination
of the possible approaches mentioned above.
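Such a combined setup could look like this (driver, nodes, and the option name
are all hypothetical):

```shell
# A hypothetical sound card driver sits on its PCI device node -- its
# major parent -- while the DMA driver it also depends on is supplied
# through a command line option:
settrans -a /servers/bus/pci/0/5/0 /hurd/sb16 --dma=/dev/isa-dma
```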


As stated before, hardware drivers are not distinguished from other
applications (translators) by any special hardware access permissions -- with
the single exception of the special drivers at the core of the driver system.
(Root bus driver for memory and I/O access, IRQ driver for interrupt handling.)

All hardware access by regular device drivers is done using filesystem
operations (RPCs) on nodes exported by the lower level drivers. (Bus drivers,
IRQ driver.) In principle, any program could access those nodes.

So how can we prevent unauthorized hardware access, if any program run by any
user could access the hardware nodes? Simple: By restricting file access
permissions on these nodes. So while anybody could run a program (possibly
self-created) that tries to do hardware access, the program won't be able to
actually access any hardware unless the user who started it has the necessary
access permissions on the required hardware node.

Who is effectively allowed to access the nodes depends on the policy of the
system vendor and/or administrator: Critical nodes, which allow breaking system
integrity if used improperly, will usually be only accessible to a privileged
user. (Root only, or optionally some special system user.) In a typical
workstation setup, some nodes might be accessible to the user logged in at the
system's primary console. (As that is the one who can physically use the
devices.) Devices that can be safely shared by higher-level drivers can even be
accessible to everyone.

UNIX file access permissions also allow more sophisticated setups: The
administrator might choose for example to allow ordinary users to load drivers
accessing even some critical nodes, if the drivers themselves are only from a
trusted set. This can be achieved by using suid or sgid on the trusted drivers,
so users can run them even if they have no direct access permissions to the
relevant hardware nodes.
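For instance (the driver name is invented; suid semantics are the standard
UNIX ones):

```shell
# Make a trusted driver setuid root, so ordinary users can run it even
# though they have no direct access to the underlying hardware node:
chown root /hurd/trusted-cdrom-driver
chmod u+s /hurd/trusted-cdrom-driver
```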

   _Small Spaces_

Small spaces are an optimization in L4 on x86, using the segmentation features
to map a task's address space into all other tasks' address spaces. That allows
switching to and between the tasks in small spaces at any time without doing a
full context switch. (Otherwise a context switch is quite expensive on x86.)
Thus drivers, which are called often due to frequent hardware interrupts or
requests from the users, do not cause a considerable penalty.

To allow for small spaces to be used, we need to make sure that a task's
address space is compact (no big holes), so it actually fits in a small space.
This isn't driver-specific, though -- in principle, *any* task that is small
enough is eligible.


I don't really know what kinds of locking issues can be involved with hardware
drivers. I think there should be nothing requiring special handling.

Whenever some resource needs to be used by more than one client, it should be
handled by an extra driver, which makes sure that access is properly
encapsulated. Priority to access the resource has to be handled by the
accounting mechanisms.

If we only need to make sure some operation happens quickly enough, no real
locking should be necessary. Instead, temporarily raise the priority, again
using accounting mechanisms.


As already mentioned, hotplugging isn't really anything special according to
this proposal. If a bus driver detects a new device, it will simply create a
new device node in the exported filesystem. The user could now set a driver on
that node manually, or some hotplugging daemon (shell script or C program or
whatever) can listen for file/dir change notifications, and perform the
necessary actions to load a driver and/or set appropriate access permissions,
once a new node appears.

Unplugging means the driver node will disappear. The driver should detect this
and exit cleanly. Higher-level drivers will in turn detect the FS exported by
the driver going away. A hotplug manager will also get a file/dir change
notification, and can take appropriate action if necessary.


From the above considerations, only a few extensions over the standard
POSIX/Hurd mechanisms necessary. Most of the driver infrastructure is actually
contained within the core drivers, like the root bus driver, IRQ driver etc.,
as well as the intermediate bus drivers.

What extensions/additions we need are:
- The initial driver setup task
- The interfaces for passing the hardware access capabilities
- Handling of RPCs on these capabilities in wortel
- Maybe the interfaces for passing direct access on I/O ports

That's it. Everything else is in the drivers themselves, and the interfaces
between them.

Of course, the core drivers are special and need to be considered part of the
framework, unlike other drivers that can just be plugged in. Defining the
interfaces for the bus drivers (and maybe some helper functions for handling
them) is also part of the framework. So when implementing the proposal, all of
these have to be considered.

Note that, depending heavily as it does on the functionality offered by the
Hurd, the development of the driver framework is inherently interwoven with
the development of the Hurd on L4 itself.

Implementing the proposal should probably start with getting a ramdisk (or
hacking grub2 so we can access the real disk), and writing a first take on the
driver setup task and the hardware capability handling. Having this, the root
bus driver and PCI driver can be implemented. (Possibly without direct memory
region mapping at first -- this is an optimization that is not strictly
necessary.) At this point, we can start implementing the first device drivers.
Meanwhile, the IRQ driver and more bus drivers can be created, gradually
extending the range of possible device drivers to write.

All along the way, missing functionality in the Hurd needs to be filled in:
libc functions, core server facilities, process accounting.

Note that, while the way outlined is probably the logical route following the
dependencies, different approaches are possible. In principle, we can write
some first preliminary drivers just now: Running the (few) drivers directly
from wortel for now, not implementing the proper bootstrapping procedure using
filesystem and process (translator) startup mechanisms; implementing the bare
RPC interfaces directly, without help of the libc and libnetfs wrappers for the
filesystem abstraction; and passing the RPC ports through the environment,
instead of using filesystem lookups -- all the missing facilities can be
(temporarily) avoided. This way we can get started right away, creating the
badly needed IDE driver for example. It can then be moved to the proper
infrastructure later, once the foundations are in place.


