From: Olaf Buddenhagen
Subject: IPC etc. (was: Future Direction of GNU Hurd?)
Date: Sat, 20 Mar 2021 19:02:39 +0100
User-agent: NeoMutt/20170609 (1.8.3)

Hi again,

On Mon, Mar 15, 2021 at 07:58:33PM +1100, William ML Leslie wrote:
> On Mon, 15 Mar 2021 at 05:19, Olaf Buddenhagen <olafbuddenhagen@gmx.net> 
> wrote:
> > On Thu, Feb 25, 2021 at 06:48:11PM +1100, William ML Leslie wrote:

> > > I am still hopeful that someone will figure out how to do async
> > > right at some point.

> It's somewhat easy to stay as close to io_uring as possible, having
> ring buffers for messages ready to be sent and messages to be
> received.

It's quite unusual to treat generic async IPC the same as async I/O...
Though to be frank, it actually is the same in the IPC approach I'm
envisioning :-)

The issue with io_uring, however, is that it's designed for the
monolithic system use case, where all I/O is handled by the kernel. In a
microkernel environment, rather than having a shared-memory protocol
between userspace processes and the kernel, the obvious approach is to
implement such protocols directly between the clients and servers
instead...
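
Roughly, what I have in mind (just a sketch, with all names made up) is
a pair of single-producer/single-consumer rings in a memory region
shared by exactly one client and one server -- the same
submission/completion split io_uring uses, only without the kernel on
the fast path:

/* Hypothetical sketch: one shared-memory segment per client/server
   connection, holding an io_uring-style submission ring and a
   completion ring -- but mapped directly between the two processes,
   with no kernel involvement on the fast path.  All names invented. */

#include <stdatomic.h>
#include <stdint.h>

#define RING_ENTRIES 64                   /* must be a power of two */

struct ipc_msg {
    uint32_t op;                          /* request/reply opcode */
    uint32_t tag;                         /* matches replies to requests */
    uint64_t args[6];                     /* inline payload or buffer refs */
};

struct ring {
    _Atomic uint32_t head;                /* consumer advances this */
    _Atomic uint32_t tail;                /* producer advances this */
    struct ipc_msg slots[RING_ENTRIES];
};

struct connection {
    struct ring submit;                   /* client -> server */
    struct ring complete;                 /* server -> client */
};

/* Producer side: 0 on success, -1 if the ring is full.  When the ring
   was previously empty, the producer would additionally poke the peer
   through some kernel-provided wakeup -- the only point where the
   kernel gets involved at all. */
static int ring_push(struct ring *r, const struct ipc_msg *m)
{
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);

    if (tail - head == RING_ENTRIES)
        return -1;
    r->slots[tail & (RING_ENTRIES - 1)] = *m;
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 0;
}

/* Consumer side: 0 and *m filled in, or -1 if the ring is empty. */
static int ring_pop(struct ring *r, struct ipc_msg *m)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

    if (head == tail)
        return -1;
    *m = r->slots[head & (RING_ENTRIES - 1)];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 0;
}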

Of course that doesn't help if we want a unified I/O queue for requests
to multiple servers: however, it's not immediately obvious to me that
implementing a mechanism for that in the kernel (instead of client-side
handling) is indeed a good idea...
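
To illustrate what I mean by client-side handling: a client library
could simply keep an array of such connections and drain all the
completion rings into one logical event stream, without the kernel
knowing anything about the aggregate queue. (Again purely hypothetical,
reusing the structures from the sketch above.)

/* Hypothetical client-side aggregation over several per-server
   connections: drain every completion ring in turn and hand the
   events to a single callback, so the application sees one queue even
   though no unified kernel object exists. */

typedef void (*completion_fn)(int server_idx, const struct ipc_msg *m);

static int drain_completions(struct connection **conns, int nconns,
                             completion_fn handle)
{
    struct ipc_msg m;
    int progress = 0;

    for (int i = 0; i < nconns; i++)
        while (ring_pop(&conns[i]->complete, &m) == 0) {
            handle(i, &m);
            progress++;
        }
    return progress;        /* 0: nothing was ready, caller may block */
}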

> We're a bit spoiled in the capability space, though.  Many of our
> protocols include the ability to send a message before its target or
> arguments become available, as well as the ability to only have a
> message sent on receipt or failure of a previous message.
[...]
> The third solution is to add logic to the kernel to perform further
> sends when a message is received, and complicating the kernel at all
> is frankly a bit scary (and how should we time-bound that?).

If we take that to the logical extreme, we get grafting, i.e. uploading
arbitrary logic expressed in a safe language -- a concept that was
proposed in academia decades ago, but AFAIK hadn't made it into the
mainstream until recently, in the form of eBPF in Linux...

Intuitively, it doesn't feel like accounting would be much different
from what it is for individually issued kernel requests?...

Of course the question is whether it's really worthwhile. It would
arguably provide more motivation for having I/O queue handling in the
kernel, since in many cases the grafted logic could handle things
without a round trip to the client, thus saving context switches.
However, it's not immediately clear to me that the downsides of the
indirection through the kernel wouldn't actually outweigh the
savings...

Another question is whether grafting using eBPF (or some other
language) isn't actually powerful enough to implement custom queue
handling, rather than needing a fixed mechanism for that in the
kernel?...
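
For instance, the "send B once A completes" pattern from the capability
protocols you mention could be expressed as a tiny graft that the kernel
runs when A's reply arrives -- conceptually just a callback over the
same ring structures as above, except verified and executed inside the
kernel. (Purely hypothetical sketch, written as plain C rather than any
existing eBPF hook:)

/* Hypothetical graft: chain a follow-up request onto the completion of
   an earlier one, without a round trip to the client.  Written as
   ordinary C here; in a real design it would be expressed in some
   verifiable language (eBPF or similar) and run by the kernel. */

struct chain_graft {
    uint32_t wait_tag;           /* tag of the request we wait for */
    struct ipc_msg followup;     /* request to submit once it completes */
};

/* Run (hypothetically) by the kernel for each completion it delivers.
   Returns 1 if the completion was consumed by the graft, 0 if it
   should still be delivered to the client as usual. */
static int chain_graft_run(struct chain_graft *g,
                           const struct ipc_msg *completion,
                           struct ring *next_submit)
{
    if (completion->tag != g->wait_tag)
        return 0;

    /* Forward a result from the reply into the follow-up request,
       e.g. a handle or offset returned by the first server. */
    g->followup.args[0] = completion->args[0];
    return ring_push(next_submit, &g->followup) == 0;
}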

For my design, I was for a while considering support for more or less
complex generic logic in IPC requests -- though in my case it would be
handled by the server receiving the initial request (and passed on to
other servers as needed) rather than by the kernel.

However, I realised that with the way I'm approaching IPC, it is
actually quite natural to express the major use cases directly in the
IPC protocol -- thus mostly eliminating the motivation for generic
logic...

> I plan to do a little more on a per-process basis.  A few turns before
> a process is scheduled, we make sure to page-in anything the process
> is about to touch.  A capability provides the means for a process to
> say which addresses it will soon access, in addition to the
> instruction pointer.
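
(Just to make sure I'm picturing this right: I imagine the interface
would be something like the following -- all names invented on my
part...)

/* Hypothetical shape of the hint described above: before blocking, a
   process tells its pager/scheduler which ranges (plus the resume
   instruction pointer) it will touch when next scheduled, so they can
   be paged in a few turns before it runs. */

#include <stddef.h>
#include <stdint.h>

struct touch_range {
    uintptr_t start;
    size_t    len;
};

/* Invoked on a capability to the process's pager; purely advisory,
   so failures can be ignored. */
int prefetch_hint(uintptr_t resume_ip,
                  const struct touch_range *ranges, size_t nranges);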

That's an interesting idea... Not sure though whether there are actual
use cases for that: in theory, it could reduce the latency for processes
that are resumed after having been idle long enough to get paged out --
but only if they know in advance that they need to resume soon... I
can't think of a situation where a process needs to react quickly to an
event scheduled long in advance?

> An aside: I absolutely want to have optional orthogonal persistence
> per-application.  Imagine having emacs up and ready to go the moment
> you log into your machine.  Yes please.

How would that work per-application? Don't we have to restore the
application's environment (including capabilities) -- meaning it has to
be more or less system-wide?...

Either way: yes, I totally want the ability to seamlessly resume any
activities (that do NOT include Emacs in my case :-P ) after logouts,
power cycles etc. Indeed I consider it among the two or three most
important must-have features of my design. (Maybe *the* most important
one? It's hard to rank them...)

However, I don't intend to implement this with orthogonal persistence
based on preserving the entire memory image of processes.

This type of persistence mechanism is very tempting, because it feels
like it provides very desirable functionality with very little effort.
The problem is at the edges, where it doesn't help: things like
upgrading software; migrating part of the environment to a different
system instance; recovering from crashes involving memory corruption...

Of course transparent orthogonal persistence doesn't *preclude* handling
these situations: we just need to serialise all precious state to carry
it forward explicitly... The thing is, once we have such a serialisation
mechanism, why do we need the other persistence mechanism at all? Better
to make serialisation the sole persistence mechanism and ensure it works
really well, rather than having it be just a poorly maintained backup
mechanism for special cases...
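
Concretely, making serialisation the sole mechanism would mean that
every persistent application implements a pair of hooks roughly like
the following (sketch only, all names invented), which the session
infrastructure drives at checkpoint and restore time -- instead of ever
dumping raw memory images:

/* Hypothetical sketch of serialisation-based persistence: rather than
   preserving a process's memory image, each application registers
   explicit hooks that write out and rebuild its precious state. */

#include <stdio.h>

struct persist_ops {
    /* Write all state worth keeping (documents, session layout,
       pending operations) to the stream in a versioned format. */
    int (*checkpoint)(FILE *out);

    /* Rebuild the application's state from a stream written by a
       possibly older version of checkpoint(). */
    int (*restore)(FILE *in);
};

The same format then serves software upgrades, migration to another
system instance, and recovery after a crash -- exactly the edge cases
where a raw memory image is useless.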

(Shap will probably tell me that I got it all wrong or something: but
the truth is that my conclusions on this matter haven't budged over the
past 15 years -- and I can't imagine them budging over the next 15 :-) )

> > I don't see a fundamental difference... Whether it's ext2, SLS, or
> > some sort of RAM disk: in each case you need a driver that can be
> > loaded by the bootloader?...
>
> It's just a matter of complexity.  The various pieces that implement
> the SLS are less than 5000 lines, whereas libstore is over 7000 on its
> own; libdiskfs 12000, and then libext2 on top of that.  But yes, it's
> somewhat like an initrd.

Why would that matter, though? You aren't limited in the size of the
image loaded by the bootloader, are you?...

-antrik-


