Re: Filter design for nsmux


From: Sergiu Ivanov
Subject: Re: Filter design for nsmux
Date: Thu, 23 Apr 2009 16:29:40 +0300
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.0.60 (gnu/linux)

Hello,

<olafBuddenhagen@gmx.net> writes:
> On Sun, Mar 22, 2009 at 09:37:55PM +0200, Sergiu Ivanov wrote:
>> <olafBuddenhagen@gmx.net> writes:
>> > On Sun, Feb 22, 2009 at 08:56:50PM +0200, Sergiu Ivanov wrote:
>
>> > I said running multiple translator instances in one process, not
>> > sharing one translator instance among clients... That's quite a
>> > different thing :-)
>> 
>> Hm, that's really a different thing... I understand that this question
>> will be off-topic, but I'll still ask you: do your words imply that we
>> will somehow run multiple processes within a single process?
>
> Well, processes are defined by address spaces... But I believe I know
> what you mean: running several programs (or instances of a program) as
> totally independent threads in a single address space, so logically they
> are almost like multiple processes, although they share the address
> space, and thus are technically one process.

I see... Thank you for the explanation :-) I completely forgot about
threads...
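
Just to check that I understand the idea correctly, here is a rough
sketch of what I imagine: several instances of a translator's main loop
running as independent threads in one shared address space. This is
only an illustration with plain pthreads; instance_main and
instance_args are names I am making up here, and whether the translator
libraries can really be used this way is, I guess, exactly the open
question.

  /* Illustration only: run several instances of a translator's main
     loop as independent threads sharing one address space.
     instance_main and instance_args are hypothetical names, not
     existing Hurd code.  */
  #include <pthread.h>
  #include <stdlib.h>

  struct instance_args
  {
    int id;   /* which instance this thread represents */
  };

  static void *
  instance_main (void *arg)
  {
    struct instance_args *args = arg;
    /* Per-instance initialization and the message loop would go here,
       using only per-thread state instead of globals.  */
    (void) args;
    return NULL;
  }

  int
  main (void)
  {
    enum { NINSTANCES = 4 };
    pthread_t threads[NINSTANCES];

    for (int i = 0; i < NINSTANCES; i++)
      {
        struct instance_args *args = malloc (sizeof *args);
        args->id = i;
        pthread_create (&threads[i], NULL, instance_main, args);
      }

    for (int i = 0; i < NINSTANCES; i++)
      pthread_join (threads[i], NULL);
    return 0;
  }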

> However, I believe that really needing thousands of translator instances
> *in parallel* is rather an unusual case, which will probably occur only
> in extreme situations, unless something is poorly designed. A much more
> realistic situation is having at most a few instances running in
> parallel at any given moment, while many more instances are created and
> destroyed in short succession.
>
> Here, the major problem is not resource usage of running processes, but
> rather the cost of process startup. And I believe that this should be
> the main optimization target.
>
> So the idea is to have only the first instance do a full translator
> startup. It would do as much common initialization as possible; and then
> some mystical lightweight mechanism would be used for launching the
> other instances from there -- something that is much cheaper than normal
> process creation...

Hm... This sounds very, very interesting... However, I'd think that such
functionality could not be implemented without some extensions to
GNU Mach -- what do you think?

>> Another question now (probably, I'm repeating myself already...): how
>> problematic is that the filter should know about nsmux? After all, the
>> filter's main real use case is running in a dynamic translator stack.
>> I understand that having the filter capable of running as a normal
>> translator would be a nice option, but I don't see the absence of
>> this feature as a very bad thing.
>
> Actually, this doesn't prevent the filter from running outside nsmux. It
> only means that the filter must be aware when it is running on nsmux,
> and handle it specially. That's not really a big deal technically -- but
> it's not very elegant, as it means things are not as orthogonal as I'd
> like them to be. nsmux is not fully transparent -- certain kinds of
> other programs need to handle it specially to work correctly...

I see... So, do you think we should still try to pursue orthogonality,
or shall we stop at this design idea for now? (I mean adding a special
RPC for the filter to obtain the node without shadows).

>> > The RPC for getting the underlying node logically would belong to
>> > the file interface (fs.defs)
> [...]
>> > The alternative is creating a new interface just for this special
>> > call. We wouldn't need to touch existing interfaces; but it would be
>> > rather inelegant...
>> 
>> I am somehow more inclined towards creating a new special interface for
>> nsmux... Could you please point out the reasons why you consider this
>> solution rather inelegant?
>
> See above: logically, the RPC for obtaining the underlying node belongs
> with the existing interfaces. The operation is not really
> filter-specific -- it could be useful in other situations as well.
>
> The RPC for getting a non-shadowed version of the node on the other hand
> is very specific, so having a distinct interface only for this is
> probably really the best approach.

OK. To sum up: do I understand it right that we shall add the RPC for
retrieving the underlying node of a translator to fs.defs, and create
a new special interface expressly for the RPC for getting the
non-shadowed version of a node?
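
Just to make this summary concrete, this is roughly how I picture the C
stubs that MIG would generate for the two calls. The names
file_get_untranslated and nsmux_get_unshadowed are placeholders I am
inventing for this mail only; neither RPC exists yet, of course.

  /* Hypothetical stub prototypes only -- neither RPC exists yet.  */
  #define _GNU_SOURCE 1
  #include <errno.h>             /* error_t */
  #include <hurd/hurd_types.h>   /* file_t */

  /* Would be added to fs.defs: given a port to a translated node,
     return the node the translator is sitting on top of.  */
  error_t file_get_untranslated (file_t translated, file_t *untranslated);

  /* Would live in a new, nsmux-specific interface: given a port to a
     shadow node set up by nsmux, return the node it shadows.  */
  error_t nsmux_get_unshadowed (file_t shadow, file_t *unshadowed);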

>> >> [...] there is no already existing RPC for going one translation
>> >> layer lower.
>> >
>> > My point is that traversing bottom-to-top isn't any more natural, as
>> > it requires obtaining the untranslated node at the bottom of the
>> > stack, and there is no existing RPC for that either.
>> 
>> Hm, I think I cannot understand something properly here: we *do* have
>> the possibility to get the untranslated node at the bottom of the
>> stack by opening the node with O_NOTRANS, don't we?
>
> No. We can't obtain the untranslated version of a node we have. All we
> can do is reopen the same *file name* -- which is a totally different
> thing! The filter does *not* have the file name of the node it filters.

Yes, this is true. I wonder now why one can reopen a node with
different O_READ/O_WRITE bits, but cannot do the same with the O_NOTRANS
flag...

You probably remember that at some point in this discussion I said that
one cannot reopen an existing port with different O_* flags. You told me
then that, using dir_lookup directly, one can reopen a port opened with
0 flags with, for example, O_READ. However, I've just tried it, and
indeed O_NOTRANS is an exception, whereas I used to think it was not.
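
For reference, this is roughly what my little experiment looked like,
assuming the usual seven-argument dir_lookup stub from libhurduser and
with error handling mostly left out ("/tmp/test-node" is just some node
I picked for the test):

  /* Rough sketch of the experiment: reopen an already open port with
     different flags by looking up the empty file name relative to it.
     Link with -lhurduser; error handling is mostly omitted.  */
  #define _GNU_SOURCE 1
  #include <hurd.h>       /* file_name_lookup */
  #include <hurd/fs.h>    /* dir_lookup stub */
  #include <fcntl.h>      /* O_READ, O_NOTRANS */

  int
  main (void)
  {
    /* Open some node with no access flags at all.  */
    file_t port = file_name_lookup ("/tmp/test-node", 0, 0);

    retry_type do_retry;
    string_t retry_name;

    /* Reopening the same port with O_READ works: look up the empty
       name relative to the port we already hold.  */
    mach_port_t readable;
    error_t err = dir_lookup (port, "", O_READ, 0,
                              &do_retry, retry_name, &readable);

    /* Doing the same with O_NOTRANS does not help, though: the port
       already refers to the translated node, so there is no file name
       left for O_NOTRANS to act on.  */
    mach_port_t bottom;
    err = dir_lookup (port, "", O_NOTRANS, 0,
                      &do_retry, retry_name, &bottom);

    return err ? 1 : 0;
  }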

> The hack with O_NOTRANS on translator startup to obtain the untranslated
> node only works with special handling in nsmux. You can't do that when
> running the filter on a normal file system, implementing only the
> standard interfaces. It requires a new operation -- even if this new
> operation is hacked as a special case in an existing RPC.

I see... So, it is natural to implement this feature as a special RPC,
without hacking it into any existing one -- do I understand everything
correctly?

>> >> I thought we could merge the functionality in a single node because
>> >> it seemed to me that another node would mean another context
>> >> switch...
> [...]
>> > (Note that this would actually be a case of translator stacking
>> > optimization -- i.e. a use case for the "mobility framework"
>> > Frederik is working on. I'm not quite sure whether it's better to
>> > create special solutions for various use cases first, and only later
>> > factor out a generic stacking framework, or only work on such
>> > optimizations once the generic stacking framework is in place...)
>> 
>> Hm... I'm trying to follow your discussion with Frederik, but I'm not
>> sure I can understand how this could be a use case for the ``mobility
>> framework''. I guess I should go and read the latest mail in your
>> discussion, which I skipped due to lack of time.
>
> What you wanted to do here is avoiding a context switch by
> transparently stuffing the functionality of two logically distinct
> translators in a single process. This is *exactly* what translator
> stacking is about...

Yes, this is exactly what I meant to achieve by merging shadow and
proxy nodes together. Nevertheless, I am rather surprised to hear that
translator stacking is about stuffing the functionality of two distinct
translators into a single process... Actually, what I understand by
``translator stacking'' is *joining* several translator processes, not
*merging* them. Am I misunderstanding something?

Regards,
scolobb



