drivers for l4 (2)
Mon, 24 Mar 2003 12:21:19 +0100
Here are some thoughts we (Peter, Marcus and me) had. As usual, any comments
are welcome.
The driver framework basically has to worry about IRQ handling, bus management,
access to hardware, and hotplugging (driver loading, ...).
+ IRQ handling is basically done by L4 for us. The only thing left is
mapping a bus IRQ to a system interrupt. This can easily be done by the bus
manager.
How to install an interrupt handler:
Precondition(s): - All interrupt handler code should be relocatable.
+ The code is mapped into the interrupt thread's address space.
+ A loader thread in the interrupt thread's address space
copies and relocates the code and adds it to the chain of
handlers (a linked list) to be executed for the specific IRQ.
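The chain of handlers could be sketched as follows. This is only an illustration of the data structure described above; all names (`irq_install`, `irq_dispatch`, the `cookie` argument) are assumptions, not an existing L4 or framework API.

```c
#include <assert.h>
#include <stdlib.h>

/* One entry in the chain of handlers executed for a given IRQ. */
struct irq_handler {
    void (*handle)(void *cookie);   /* relocated handler entry point */
    void *cookie;                   /* per-driver state */
    struct irq_handler *next;
};

static struct irq_handler *irq_chain[16];  /* one chain per system interrupt */

/* Called by the loader thread after it has copied and relocated the
   handler code into the interrupt thread's address space. */
int irq_install(int irq, void (*handle)(void *), void *cookie)
{
    struct irq_handler *h = malloc(sizeof *h);
    if (!h)
        return -1;
    h->handle = handle;
    h->cookie = cookie;
    h->next = irq_chain[irq];       /* prepend to the chain */
    irq_chain[irq] = h;
    return 0;
}

/* Executed by the interrupt thread each time the IRQ fires. */
void irq_dispatch(int irq)
{
    for (struct irq_handler *h = irq_chain[irq]; h; h = h->next)
        h->handle(h->cookie);
}
```

Prepending keeps installation O(1); whether new handlers should run first or last is a policy question the sketch leaves open.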
+ A bus manager has to offer the following API:
+ enumerate children: lists all children of this bus. Obviously
children can be bus managers themselves.
+ alloc resource
+ free resource
+ map resource: provides access to a resource in the requesting AS
+ unmap resource
+ activate resource
+ deactivate resource
+ get interrupt
resources can be:
prefetchable memory address space: address space which is
accessible using normal CPU load/store instructions
and where the CPU may assume that reading a memory
location will not have side effects.
non-prefetchable memory address space: address space which is
accessible using normal CPU load/store instructions.
Reading a memory location from this space may have
side effects.
DMAable memory: physical memory which can be accessed by the
device. (More thoughts on DMA at the end of
this mail.)
for PCI this API can be extended with:
+ enable i/o
+ disable i/o
+ enable mem
+ disable mem
+ enable busmastering
+ disable busmastering
+ read configspace
+ write configspace
(other busses probably need their own extensions)
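A minimal sketch of the resource model behind "alloc resource": the resource types mirror the three kinds listed above, and a toy allocator carves ranges bump-style out of a fixed window, the way a bus manager might hand out MMIO regions. All names and the 1 MB window at 0xe0000000 are invented for illustration; the real interface would be IPC-based.

```c
#include <assert.h>

/* Resource types, matching the classes described above. */
typedef enum {
    RES_MEM_PREFETCH,     /* reads have no side effects */
    RES_MEM_NONPREFETCH,  /* reads may have side effects */
    RES_MEM_DMA           /* physical memory the device can access */
} resource_type_t;

struct resource {
    resource_type_t type;
    unsigned long base, size;
};

/* Hypothetical allocation window for the toy allocator. */
static unsigned long next_free = 0xe0000000UL;
static const unsigned long window_end = 0xe0100000UL;

/* "alloc resource": hand out the next free range of the window. */
int bus_alloc_resource(resource_type_t type, unsigned long size,
                       struct resource *out)
{
    if (next_free + size > window_end)
        return -1;                  /* window exhausted */
    out->type = type;
    out->base = next_free;
    out->size = size;
    next_free += size;
    return 0;
}
```

A real bus manager would also track owners and support "free resource"; the bump allocator just shows the shape of the contract.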
+ The hotplug manager implements the following API:
+ add device: Announces a new device in the system.
+ remove device: The device has gone away.
The hotplug manager will load the appropriate device driver into one of the
device driver AS's. The hotplug manager creates or deletes these AS's as
necessary. Within each AS a device driver management thread runs which handles
the loading and unloading of the drivers.
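The driver lookup behind "add device" could look like this: match the device identifier string against the list from the (grub-loaded) config file. The table contents, identifier format, and function name are made up for illustration.

```c
#include <assert.h>
#include <string.h>

/* One line of the config file: which driver handles which device. */
struct driver_map {
    const char *device_id;   /* device name / identifier string */
    const char *driver;      /* driver module to load */
};

/* Hypothetical contents of the grub-loaded config list. */
static const struct driver_map config_list[] = {
    { "pci:8086:100e", "eepro100"  },
    { "pci:1000:0030", "sym53c8xx" },
};

/* "add device": find the driver for an announced device, or NULL. */
const char *lookup_driver(const char *device_id)
{
    for (unsigned i = 0; i < sizeof config_list / sizeof *config_list; i++)
        if (strcmp(config_list[i].device_id, device_id) == 0)
            return config_list[i].driver;
    return 0;   /* no driver known for this device */
}
```

Once the editable list is available, the same lookup would simply run against that instead of the compiled-in table.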
Device drivers themselves are modules which are relocatable so they can be
loaded anywhere into the AS. This can be done by building them as Position
Independent Code or by having explicit relocation tables. The latter is probably
the best choice, unless we would load a driver multiple times for multiple
devices of the same type. They are loaded and started by the management thread.
They also get a reference to a port of their parent bus manager so they can
ask for resources, etc.
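Loading with an explicit relocation table, as suggested above, could be sketched like this: each table entry names an offset in the module image that holds a link-time address, and the loader rebases it to the actual load address. The entry format is invented for illustration; a real module format would carry more information per entry.

```c
#include <assert.h>
#include <stdint.h>

/* One relocation: where in the image a link-time address is stored. */
struct reloc {
    uint32_t offset;
};

/* Patch every relocated slot from its link-time base to the actual
   load address chosen by the management thread. */
void apply_relocs(uint8_t *image, uint32_t link_base, uint32_t load_base,
                  const struct reloc *r, unsigned n)
{
    for (unsigned i = 0; i < n; i++) {
        uint32_t *slot = (uint32_t *)(image + r[i].offset);
        *slot = *slot - link_base + load_base;   /* rebase the pointer */
    }
}
```

This is also where PIC would differ: PIC modules need no patching, so one image could back several instances, at the cost of slightly slower code.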
+ All drivers are loaded in their own address space for now
+ Drivers which are necessary for system bootup (HD driver, console
driver, ...) are loaded by grub.
+ The hotplug manager and a config file are also loaded by grub. The
config file also has a list of device names/identifier strings
which tells which driver can handle which device. This list will
be discarded once the system is able to load an editable list from
the filesystem.
+ The hotplug manager gets the list of the drivers loaded by grub
from the resource manager.
+ All drivers are loaded as separate processes by the resource manager.
+ The root bus manager is started as the first driver. The root
driver reads possible BIOS data (or the needed information about
which driver should be loaded is compiled in). Then it asks the
hotplug manager for the drivers which should be inserted into the
driver tree.
+ All output to stdout and stderr will be buffered in memory until the
video driver is ready to dump it on the screen. It might be useful to
also write it to a serial port.
For actually bootstrapping, the resource manager (rmgr) with some small
additions could be used:
+ keeps a list of the modules started, their name, their grub
commandline and their thread id.
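The per-module record rmgr would keep, per the list above, could look like this; the field and function names are assumptions, not rmgr's actual interface.

```c
#include <assert.h>
#include <string.h>

/* What rmgr records for each module it has started. */
struct module_info {
    const char *name;            /* module name */
    const char *cmdline;         /* grub command line */
    unsigned long thread_id;     /* L4 thread id of the started task */
    struct module_info *next;
};

static struct module_info *modules;   /* head of the list */

/* Record a started module. */
void module_register(struct module_info *m)
{
    m->next = modules;
    modules = m;
}

/* What the hotplug manager would query: look a module up by name. */
struct module_info *module_by_name(const char *name)
{
    for (struct module_info *m = modules; m; m = m->next)
        if (strcmp(m->name, name) == 0)
            return m;
    return 0;
}
```
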
+ no relocation needed for modules (Why is this done anyway? Is it
to keep sigma0 simpler (one-to-one mapping)?)
Some thoughts on DMA. We aim of course for a zero-copy model (if it is
possible at all).
+ It is desirable to allow users to provide data for DMA transfers.
For this the driver needs to be able to determine the physical
address of the user's buffer.
+ The physical address must possibly be passed by the user. This
possibly means that the user has to be trusted. It depends a bit on
how Neal's VM server works (how are pages mapped into tasks,
idempotently or at any position?).
Or the driver needs to know from which address space the request
comes. Is this possible in L4?
+ The user MUST keep the memory for the DMA transfer wired down for
the whole time the request is running in the driver RPC. Otherwise it
might happen that the page is physically reused by another task, and
your love letters end up on the ethernet or so :)
As tasks do their own VMM, wiring down memory is not a privileged
operation. This is good news. The only issue is if something goes
wrong, i.e. the task was overcommitted, it was asked to give some
pages back, and it can't. Then it will obviously be killed, but I
don't know if we can still cancel a pending DMA operation in the
device. Maybe this case is nothing to worry about.
An alternative might be for a driver to provide DMA memory to the user.
This is probably sufficient for DMA at a fixed location, but won't work
too well for randomly located DMA transfers (i.e. ethernet packets),
unless we allocate the buffers from the DMA memory provided by the driver.
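One piece of the zero-copy path above can be sketched: building a scatter/gather list for a (wired!) user buffer before programming the device, splitting at page boundaries since the pages need not be physically contiguous. `virt_to_phys` stands in for whatever mechanism (a trusted user passing addresses, or a query to the VM server) finally provides the physical addresses; here it is an identity stub so the sketch is self-contained, and all names are assumptions.

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/* One device-visible segment of the transfer. */
struct sg_entry {
    unsigned long phys;   /* physical start of the segment */
    unsigned long len;    /* segment length in bytes */
};

/* Identity stub; a real driver must obtain this from the pager/VM
   server or a trusted user, as discussed above. */
static unsigned long virt_to_phys(const void *va)
{
    return (unsigned long)va;
}

/* Split [buf, buf+len) at page boundaries into sg entries.
   Returns the number of entries written (at most max). */
unsigned build_sg(const void *buf, unsigned long len,
                  struct sg_entry *sg, unsigned max)
{
    unsigned long va = (unsigned long)buf;
    unsigned n = 0;
    while (len && n < max) {
        unsigned long chunk = PAGE_SIZE - (va & (PAGE_SIZE - 1));
        if (chunk > len)
            chunk = len;
        sg[n].phys = virt_to_phys((const void *)va);
        sg[n].len = chunk;
        va += chunk;
        len -= chunk;
        n++;
    }
    return n;
}
```

The wiring requirement is exactly what makes this list stay valid: if a page were reclaimed mid-transfer, the physical addresses recorded here would silently point at someone else's data.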
And here now some questions and thoughts we haven't solved yet:
+ In which address space is the resource manager started?
+ We can take control of the interrupt threads by migrating them to our own
address space. How does thread migration work? Restrictions?
+ How does the process server know about the booting threads?
+ think about APIs for other busses (ISA, NuBus, Zorro, ...)
+ think about APIs for message busses such as USB, firewire, scsi,
fibrechannel
+ and all things we forgot.
Daniel Wagner