
Re: slow access to files


From: Niels Möller
Subject: Re: slow access to files
Date: 06 Nov 2001 09:39:50 +0100

Farid Hajji <farid.hajji@ob.kamp.net> writes:

> The problem right now is that there is no memory sharing between normal
> clients and the filesystem translators. Here, data is simply copied across
> a costly IPC path, thus wasting a lot of CPU cycles.

I thought Mach had some mechanism that allowed ipc to send larger
amounts of memory (say a few pages at a time) between processes. If
that mechanism isn't used by Hurd I/O, it would be interesting to know
why.
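
For concreteness, here is roughly what sending pages out-of-line
looks like with GNU Mach's typed message format (a sketch only: the
message id and the function are made up, and error handling is
omitted):

    #include <mach.h>
    #include <mach/message.h>

    /* A message carrying one out-of-line data region.  */
    struct ool_msg
    {
      mach_msg_header_t head;
      mach_msg_type_long_t type;
      vm_offset_t data;           /* a pointer to the pages, not the bytes */
    };

    kern_return_t
    send_pages (mach_port_t dest, vm_offset_t pages, vm_size_t size)
    {
      struct ool_msg msg;

      msg.head.msgh_bits = MACH_MSGH_BITS (MACH_MSG_TYPE_COPY_SEND, 0)
                           | MACH_MSGH_BITS_COMPLEX;
      msg.head.msgh_size = sizeof msg;
      msg.head.msgh_remote_port = dest;
      msg.head.msgh_local_port = MACH_PORT_NULL;
      msg.head.msgh_id = 1234;                      /* hypothetical id */

      msg.type.msgtl_header.msgt_name = 0;
      msg.type.msgtl_header.msgt_size = 0;
      msg.type.msgtl_header.msgt_number = 0;
      msg.type.msgtl_header.msgt_inline = FALSE;    /* send out-of-line */
      msg.type.msgtl_header.msgt_longform = TRUE;
      msg.type.msgtl_header.msgt_deallocate = FALSE; /* keep our mapping */
      msg.type.msgtl_header.msgt_unused = 0;
      msg.type.msgtl_name = MACH_MSG_TYPE_BYTE;
      msg.type.msgtl_size = 8;                      /* bits per element */
      msg.type.msgtl_number = size;                 /* number of bytes */

      msg.data = pages;

      return mach_msg (&msg.head, MACH_SEND_MSG, sizeof msg, 0,
                       MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE,
                       MACH_PORT_NULL);
    }

The kernel transfers an out-of-line region by mapping it into the
receiver copy-on-write, so no data should actually move unless one
side writes to the pages afterwards.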

Hmm, I'll try some guessing, and hope someone more knowledgeable will
correct me:

For some reason (security? lack of notifications when pages are
modified?), a store can't give away writable pages to its users. So
when a user wants to read a page, the store has to get a fresh page,
*copy* the data into the new page, and then send it across ipc.

Alternatives might be (i) to give the user raw read/write access to
the original page, or (ii) to create a read-only or copy-on-write
clone of the original page and send that (though copy-on-write is
also bad if the usual case is that a copy happens eventually).
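
As a sketch of the difference (the store structure and both functions
are invented for illustration; only the copy strategy matters):

    #include <mach.h>
    #include <string.h>

    /* Hypothetical server-side read path; `struct my_store' and its
       page-aligned cache are made up.  */
    struct my_store
    {
      char *cache;              /* page-aligned cache of file contents */
    };

    /* (a) The suspected current behaviour: physically copy the data
       into a fresh region, which is then sent out-of-line and thrown
       away after delivery.  */
    kern_return_t
    read_with_copy (struct my_store *s, vm_offset_t offset,
                    vm_size_t len, vm_address_t *data)
    {
      kern_return_t err;

      err = vm_allocate (mach_task_self (), data, len, TRUE);
      if (err)
        return err;
      memcpy ((void *) *data, s->cache + offset, len);
      return KERN_SUCCESS;
    }

    /* (b) Alternative (ii): no memcpy at all; point the out-of-line
       descriptor straight at the cached pages (assuming offset is
       page-aligned) and let Mach map them into the client
       copy-on-write.  */
    kern_return_t
    read_without_copy (struct my_store *s, vm_offset_t offset,
                       vm_size_t len, vm_address_t *data)
    {
      *data = (vm_address_t) (s->cache + offset);
      return KERN_SUCCESS;
    }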

The point is that even though the ipc operations are quite expensive,
they should not require that bulk data be *copied* when passed between
processes. If they do, the ipc mechanism or the protocols that use it
are broken.

The only copy that really is needed (i.e. is hard to get rid of, no
matter what fancy ipc or kernel features you have) is the final one
into the buffer the user provided to read(). That buffer is private
to the process and usually isn't page-aligned, so the data has to be
copied into it, and on the Hurd that copying ought to happen in glibc.
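
In glibc terms, that final copy is roughly this pattern (a sketch
around the MIG-generated io_read stub; the function is made up and
the signatures are simplified):

    #include <mach.h>
    #include <string.h>
    #include <sys/types.h>
    #include <hurd/io.h>        /* MIG-generated io_read stub */

    /* Sketch: read via the io server, then do the one unavoidable
       copy into the caller's (probably unaligned) buffer.  */
    ssize_t
    read_sketch (io_t port, void *buf, size_t amount, off_t offset)
    {
      char *data = buf;         /* server may fill this in place */
      mach_msg_type_number_t nread = amount;

      if (io_read (port, &data, &nread, offset, amount))
        return -1;

      if (data != (char *) buf)
        {
          /* The data came back out-of-line: copy it into the user's
             buffer and drop the temporary mapping.  */
          memcpy (buf, data, nread);
          vm_deallocate (mach_task_self (), (vm_address_t) data, nread);
        }
      return nread;
    }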

/Niels


