Re: [ANNOUNCE] Introducing Codezero
Thu, 02 Jul 2009 21:20:31 +0300
Thunderbird 184.108.40.206 (X11/20090608)
I have taken a closer look at the existing Hurd implementation. It looks
very much like what I had in mind when meaning "microkernel based OS
using plan 9 design principles". In particular, Plan 9 introduces:
* Private namespaces, i.e. each process having its own view of the
filesystem hierarchy.
* Services as file servers (e.g. tcp/ip stack, ftp service, windowing
system, console etc.)
* Union filesystems i.e. multiple filesystem trees merged at a single path.
Essentially a Hurd translator setup with settrans is what a file-based
server does when mounted in Plan 9.
Plan 9 differs in that it is implemented as a monolithic kernel; the
Hurd, with its microkernel design, comes closer to what I plan to do.
Having said that, let me get into your arguments:
Bas Wijnen wrote:
I don't have enough idea about the code to know how well that would
work. Does your kernel use dynamic memory itself, by the way? In other
words, can the kernel get out-of-memory errors?
Currently, yes, the kernel can run out of memory, but there is a very
rigorous memory allocation policy. The kernel allocates only page
tables, thread control blocks, and space structures, all of which
relate solely to thread and address space creation. These allocations
can easily be controlled by introducing limits or capabilities on
thread and address space creation.
In my plan, there will be servers (i.e. C programs) that simply do IPC
in a controlled fashion. The design won't be centered around the
interface/implementation style of remote object instantiation that you
would see in, say, CORBA or Java RMI. This is because the goal is to
reduce interfaces down to a minimal generic set of calls such as
open/close/read/write, just as in Plan 9. So instead of having a
different interface for each object, there will simply be a file-based
interface for everything.
I would expect this to be very harmful for performance. Either you have
a useful interface for everything (including stream and packet
transfer), but then the system will be over-designed for just about any
server, or you have a slim interface, and many servers will need to be
creative (read: slow) to fit their service to it.
I don't see why performance is degraded by using a file-based I/O
interface. OS services are very well suited to file-based I/O. For
example, the TCP/IP socket interface in Unix exists only for historical
reasons and is a good example of the kitchen-sink problem in the Unix
API; it could well be implemented as file I/O. Similarly, ioctl and
terminal I/O are notoriously badly designed. One could have "ctl" and
"data" files, as in Plan 9, to implement driver control and data flow
(or use extensible file attributes).
A file-based API should work for most services, such as communication
protocols, drivers, the console, etc. Please elaborate on why you
oppose it.
Capability management needs to be done during all initialization. I'm
not sure what you mean by "mounting a new service on a process
namespace", but I suppose it means allowing access to a certain service
by a process?
Yes; from what I understand of Hurd terminology, this seems to
correspond to setting up a translator via "settrans". By settrans (or
mount), you add a new service to your process namespace. By restricting
access to the vnode of a service, access is controlled on that
particular service.
Also from what I understand, the *object* you mount may be implemented
by a server that implements other objects in the same namespace. In
this case, fine-grained capabilities help in deciding which interfaces
or object instances of a server are allowed. For example, if you have
/dev/tcp/port0 and /dev/tcp/port1, both objects are served by a single
server called "tcp", with access decided by capabilities on those
objects. This is as far as I would go in object-based design :)
What's better though, is that new and legacy services can co-exist in
a flattened hierarchy that doesn't increase system complexity.
If that means that you require every process to be able to access FS0,
thus in practice implementing global thread identifiers with which any
thread can communicate with any other thread, it isn't better IMO. ;-)
This is something I don't yet understand about capabilities; it seems
related to the authorization/designation concept. Why would you need to
avoid global thread ids? The microkernel can control and deny any IPC
communication (at least I think I can implement this in Codezero),
regardless of thread id. The downside I can think of is that each
thread would need a capability for each thread it wants to communicate
with, inflating the in-kernel data structures.
This is how things currently look, by the way, in case you want to take
a quick glance: