From: olafBuddenhagen@gmx.net
Subject: Re: GSoC: the plan for the project network virtualization
Date: Thu, 26 Jun 2008 06:32:02 +0200
User-agent: Mutt/1.5.18 (2008-05-17)

Hi,

On Sun, Jun 22, 2008 at 11:26:19PM +0200, zhengda wrote:
> olafBuddenhagen@gmx.net wrote:

> I'm creating the proxy of the process server. Hopefully, it's not too
> difficult.

Well, if you run into serious problems, you can postpone this for now
and work on the other stuff... But I hope you can get this working
before the end of the summer :-)

> "root could delegate access to the real network interface, and the
> user  could run a hypervisor"? How do we do it? create another program
> that is run by root and that  communicates with the hypervisor?

To be honest, I don't know the details. In a capability system, it
should always be trivial to delegate access to something. But I do fear
that the Mach device interface does not really fit there -- that it's
not possible to directly hand out a capability for a single kernel
device.

If that is the case, we would again need a proxy for the master device
port, which would forward open() on the network device, but block all
others.
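
For illustration, here is a rough sketch of what the open handler of
such a proxy might look like. This assumes the usual device.defs server
stubs; real_master and allowed_name are hypothetical globals set up
when the proxy is started:

    /* Sketch of a master device port proxy: forward device_open() on
       one whitelisted device, refuse everything else.  real_master
       and allowed_name are assumed to be set up at startup.  */

    #include <string.h>
    #include <mach.h>
    #include <device/device.h>

    static mach_port_t real_master;   /* the real device master port */
    static const char *allowed_name;  /* e.g. "eth0" */

    kern_return_t
    ds_device_open (mach_port_t master_port, mach_port_t reply_port,
                    mach_msg_type_name_t reply_type, dev_mode_t mode,
                    dev_name_t name, mach_port_t *device,
                    mach_msg_type_name_t *device_type)
    {
      /* Block open() on anything but the delegated network device.  */
      if (strcmp (name, allowed_name) != 0)
        return D_NO_SUCH_DEVICE;

      /* Forward the open to the real master port, and pass the
         resulting send right on to our client.  */
      *device_type = MACH_MSG_TYPE_MOVE_SEND;
      return device_open (real_master, mode, name, device);
    }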

> Maybe we can do it like this: the user is allowed to run a
> hypervisor, but the hypervisor cannot access the network interface.
> If the user wants to access the external network, his hypervisor must
> forward the packets to the root's hypervisor. The root's hypervisor
> is responsible for controlling the traffic from the user. The user
> can do whatever he wants in his hypervisor.

That was the idea for the case when root wants to delegate partial
access to the device, i.e. to a single IP or range of IPs. Delegating
full access to the network interface should be possible without that --
see above.

>> I don't think there is a need to understand all packets -- in most
>> cases, a simple IP-based filter should be sufficient. But of course,
>> you could employ other filters if necessary. The modular approach
>> suggested above makes this easy :-)
>>   
> Maybe BPF can also work for this case.

Indeed :-) As discussed on IRC (
http://richtlijn.be/~larstiq/hurd/hurd-2008-06-20 ), I was actually
thinking of this possibility, and left it out of the previous mail only
to avoid confusion. The filter could be implemented by a server that
runs the BPF code in user space, plus a wrapper that generates the
necessary filter rules from a simple command line.

Not only would this simplify the implementation of the filter, but it
would also allow for later optimizations.

Note that I suggested on IRC that it would also be possible to
implement the hub with BPF, though this is not necessary for now. I was
thinking more about this, and realized a couple of things.

First of all, it's probably better not to call it a "hub" but a
"multiplexer". That is a more generic name; but more importantly, I
realized that it actually needs to do more than a hub: pfinet sets
packet filter rules on the network device, so that it gets only the
packets for the IP it is interested in. Now the multiplexer provides
virtual network interfaces, which not only means that it has to move
packets in and out, but also that it has to implement the packet
filters...

This way the multiplexer actually *does* work like a router, more or
less like you suggested, forwarding only the desired packets to each
client. (The filtering works per client, not per virtual interface...)
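
(For reference, installing such rules goes through the
device_set_filter() RPC, roughly the way pfinet does it on the real
interface; this is the call the virtual interfaces have to implement as
well. A fragment, reusing the ip_filter program from above and glossing
over how the BPF program is marshalled into the filter word array:)

    #include <mach.h>
    #include <device/device.h>
    #include <device/net_status.h>   /* filter_t, NETF_* */

    /* How a client installs its filter on a (real or virtual) network
       device: packets matching the filter are then delivered as
       messages to rcv_port.  The exact marshalling of the BPF program
       into the filter array (e.g. the NETF_BPF header word) is
       glossed over here.  */
    kern_return_t
    install_filter (device_t netdev, mach_port_t rcv_port)
    {
      return device_set_filter (netdev, rcv_port,
                                MACH_MSG_TYPE_MAKE_SEND,
                                0 /* priority */,
                                (filter_t *) ip_filter,
                                sizeof ip_filter / sizeof (filter_t));
    }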

I really don't know why I didn't see this earlier; I feel very stupid
now.

Note however that the routing policy is determined by the filter rules
as requested by the clients. The multiplexer itself just acts like a
hub, forwarding from each virtual interface to all other virtual
interfaces -- it's only the client-set filter rules that prevent
unnecessary packets from being delivered.
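
So the forwarding step in the multiplexer itself stays trivial; a
sketch of the inner loop (struct vif and deliver_filtered() are
made-up names):

    #include <stddef.h>

    /* One virtual interface, with its list of clients and their
       filter programs (details omitted).  */
    struct vif { struct vif *next; /* clients, filters, ... */ };

    /* Runs the per-client filter programs on the packet and delivers
       it to each client whose filter accepts it.  */
    void deliver_filtered (struct vif *to, const void *pkt, size_t len);

    /* A packet arriving on one virtual interface is offered to all
       the others; the client-set filters decide what gets through.  */
    void
    broadcast_packet (struct vif *vifs, struct vif *from,
                      const void *pkt, size_t len)
    {
      for (struct vif *to = vifs; to != NULL; to = to->next)
        if (to != from)   /* don't loop packets back to the sender */
          deliver_filtered (to, pkt, len);
    }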

To enforce policies, we still need filter translators sitting between
the multiplexer and the clients. Normally, they will have little to do
though: The client should have set a rule that makes the virtual
interface (the multiplexer) only deliver packets that the client needs.
But if the client tries to do something nasty, like setting a rule that
would give it access to packets it is not allowed to see (i.e. for
other IPs), or if it sends packets that it is not allowed to send
(again, using a wrong IP), the filter will block these disallowed
packets.
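
In BPF terms, the enforced rule is just the mirror image of the
client's own: incoming packets must carry the client's IP as
destination (as in the example above), outgoing ones must carry it as
source. The outgoing half might look like this (again hypothetical,
with CLIENT_IP as before):

    /* Enforcement on the outgoing path: block any IPv4 packet whose
       *source* address is not the client's assigned CLIENT_IP,
       whatever rules the client itself asked for.  Offsets again
       assume an Ethernet header (IP source address at byte 26).
       Non-IP traffic like ARP would need extra rules; omitted.  */
    static struct bpf_insn out_filter[] = {
      BPF_STMT (BPF_LD + BPF_H + BPF_ABS, 12),            /* ethertype */
      BPF_JUMP (BPF_JMP + BPF_JEQ + BPF_K, 0x0800, 0, 3), /* IPv4?     */
      BPF_STMT (BPF_LD + BPF_W + BPF_ABS, 26),            /* IP src    */
      BPF_JUMP (BPF_JMP + BPF_JEQ + BPF_K, CLIENT_IP, 0, 1),
      BPF_STMT (BPF_RET + BPF_K, 0xffffffffU),            /* pass      */
      BPF_STMT (BPF_RET + BPF_K, 0),                      /* block     */
    };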

This is not the only possible design I can think of, but it seems the
most reasonable to me now.

>> The original idea was that the hypervisor can create multiple virtual
>> interfaces with different filter rules, *and* several pfinets can
>> connect to the same virtual interface if desired. (Just as several
>> pfinets can connect to the same real interface.) This would have made
>> for a rather complicated setup...
>>   
> I don't think it complicates the setup if several pfinets connect to
> the same virtual interface at the same time.

Indeed, I meant the other half: Several virtual interfaces with
individual policies would complicate the setup.

I have been unclear however (as the ideas haven't been quite clear in my
own head): It's actually the "individual policies" bit that would
complicate it. If we move policies to external filters, there is no
problem at all with multiple virtual interfaces -- no per-interface
setup is necessary that way. The multiplexer just creates interfaces
whenever clients ask for them. (So if some clients open "eth0./0" and
some open "eth0./1" for example, two virtual interfaces are created.)

That's why I think it is better to have a simple multiplexer without any
policies, and use additional filter translators to enforce policies
where necessary. This way the setup remains simple: The multiplexer
doesn't need any rules, and each individual filter translator needs only
one simple rule. If you need different policies for different clients,
you just set up several filter translators.

Note that you can actually set several filter translators on top of a
single (real or virtual) interface. The multiplexer is really only
necessary if you want forwarding between virtual interfaces. (If you
want all clients to be able to talk to each other, you need a virtual
interface for each one.)

> If a hypervisor has only one virtual interface, several hypervisors
> need to be set up, and the packets received by one hypervisor should
> be forwarded to the others. Every hypervisor can have a filter to
> control the traffic.

When I described the idea of splitting the multiplexer (hub)
functionality from the filtering functionality, I continued to use the
term "hypervisor" for the filtering functionality. I think this created
some confusion. The idea (which was vague in the last mail but clearer
now) is to have the multiplexer provide several virtual interfaces, and
the filter translators work with only one interface each. (But both can
have several clients on an interface.)

The "hypervisor" as originally planned doesn't exist anymore.

> I think the hypervisor with multiple interfaces is more flexible.

Well, with the confusion I created myself, I'm not sure anymore what you
mean here...

If you mean keeping the original idea of a hypervisor that does provide
several virtual interfaces *and* enforces policies on each, then no,
this is not more flexible. It is a monolithic beast combining different,
mostly unrelated functionalities. It is un-hurdish; and it can never be
more flexible than individual components that can be combined as
necessary.

The only disadvantage of separate components is that the packets may
need to traverse more servers before they arrive. But, as explained on
IRC, this probably could be optimized with some BPF magic, if it turns
out to be a serious problem.

I'm totally convinced now that the modular approach is better.

-antrik-



