qemu-devel


From: Luiz Capitulino
Subject: Re: [Qemu-devel] [PATCH v6] net: add support of mac-programming over macvtap in QEMU side
Date: Mon, 17 Jun 2013 10:34:28 -0400

On Mon, 17 Jun 2013 17:20:13 +0300
"Michael S. Tsirkin" <address@hidden> wrote:

> On Mon, Jun 17, 2013 at 09:30:42AM -0400, Luiz Capitulino wrote:
> > On Mon, 17 Jun 2013 16:21:31 +0300
> > "Michael S. Tsirkin" <address@hidden> wrote:
> > 
> > > On Mon, Jun 17, 2013 at 09:11:27AM -0400, Luiz Capitulino wrote:
> > > > On Fri, 14 Jun 2013 15:45:52 +0800
> > > > Amos Kong <address@hidden> wrote:
> > > > 
> > > > > Currently macvtap based macvlan device is working in promiscuous
> > > > > mode, we want to implement mac-programming over macvtap through
> > > > > Libvirt for better performance.
> > > > > 
> > > > > Design:
> > > > >  QEMU notifies Libvirt when the rx-filter config is changed in the guest,
> > > > >  then Libvirt queries the rx-filter information with a monitor command
> > > > >  and syncs the change to the macvtap device. The relevant rx-filter config
> > > > >  of the NIC consists of the main MAC, the rx-mode items and the vlan table.
> > > > > 
> > > > > This patch adds a QMP event to notify management of rx-filter changes,
> > > > > and a monitor command for management to query the rx-filter
> > > > > information.
> > > > > 
> > > > > Test:
> > > > >  Repeatedly add/remove vlans and change the macaddr of vlan
> > > > >  interfaces in the guest with a loop script.
> > > > > 
> > > > > Result:
> > > > >  The events flood the QMP client (management), and management spends
> > > > >  too many resources processing them.
> > > > > 
> > > > >  The event_throttle API (rate set to 1 ms) can keep the events from flooding the
> > > > 
> > > > I doubt this is a valid value. Today, the three events that use the
> > > > event throttle API set the delay rate to 1000 ms.
> > > > 
> > > > >  QMP client, but it could cause an unexpected delay (~1 ms); guests
> > > > >  normally expect rx-filter updates to take effect immediately.
> > > > 
> > > > What do you mean by "immediately"? There's a round-trip to the host plus
> > > > all the stuff QEMU will execute to fulfil the request. And how did you
> > > > measure this, btw?
> > > > 
> > > > What you have to do is measure your test case in three different
> > > > scenarios:
> > > > 
> > > >  1. Against upstream QEMU (ie. no patches)
> > > >  2. With the event throttle API
> > > >  3. With this patch
> > > > 
> > > > Only then will you be able to tell which is better.
> > > 
> > > 
> > > I doubt there's a lot of value in splitting this patch in two and
> > > testing whether we can reproduce any problems with a partial patch. 
> > 
> > I didn't suggest that. Actually, I didn't even talk about code.
> >
> > > The
> > > problem of management not keeping up with an event flood triggered by mac
> > > updates in the guest might be purely theoretical,
> > 
> > That's not the problem the flag thing is trying to solve. We have the
> > event throttle API for that. What the flag is trying to solve is _guest_
> > latency due to event generation and it's not clear at all if this is a real
> > problem.
> > 
> > > but a robust solution that
> > > does not have theoretical holes makes me sleep better at night, and it's a
> > > trivial amount of code.
> > 
> > It has the drawback of coupling the query command with the event.
> 
> What specifically is the problem?

Events and commands are independent entities. Coupling them can have bad
side effects: for example, we may gain the ability to disable commands in
the future, and in that case this event would only ever be sent once!

We could, of course, disable the event along with it, but that's an unclear
side effect too.
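
To make the coupling concrete, here is a minimal sketch of the pattern being
discussed. The names below (NicSketch, notify_armed, the stub functions) are
made up for illustration and are not the identifiers used in the patch; the
point is only that the device emits the event while a flag is armed and the
query command is what re-arms it, which is exactly the dependency between
event and command:

  /* sketch.c -- illustration only, not the code from the patch. */
  #include <stdbool.h>
  #include <stdio.h>

  typedef struct {
      bool notify_armed;   /* true: the next rx-filter change emits one event */
  } NicSketch;

  static void emit_event(void)    { printf("event: rx-filter changed\n"); }
  static void report_filter(void) { printf("reply: current rx-filter state\n"); }

  /* Called when the guest reprograms its MAC address or vlan filter. */
  static void rx_filter_changed(NicSketch *nic)
  {
      if (nic->notify_armed) {
          emit_event();
          nic->notify_armed = false;   /* stay quiet until management queries */
      }
  }

  /* Handler for the query command issued by management. */
  static void query_rx_filter(NicSketch *nic)
  {
      report_filter();
      nic->notify_armed = true;        /* querying re-arms the event */
  }

  int main(void)
  {
      NicSketch nic = { .notify_armed = true };

      rx_filter_changed(&nic);   /* first change: one event */
      rx_filter_changed(&nic);   /* further changes: no event, no flood */
      rx_filter_changed(&nic);
      query_rx_filter(&nic);     /* management catches up, event re-armed */
      rx_filter_changed(&nic);   /* next change: one event again */
      return 0;
  }

In this model management can never be flooded, which is the robustness
Michael is after; the cost is that if the command were ever disabled, the
flag would never be re-armed and the event would stop after its first
emission, which is the side effect described above.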

> I only see a fix for a problem that might be theoretical, but I'm
> happier knowing I don't need to worry about it.
> 
> > IMO, such
> > a coupling has to be justified with real data.
> 
> It's a quality of implementation issue.
> 
> No spec says that you should not delay RX filter updates for a very long
> time (yes, 1000 ms is very long), but if a NIC does, it's a low-quality
> NIC and it will cause trouble in the field.

The 1000 ms I talked about is *not* what the guest will see. If there are
events pending, the throttle API just queues the event and returns right
away. I'd even _guess_ that this is faster than emitting the event.

The timeout you have to specify in the throttle API is what *management* will
see when events are flooded.
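
As a rough model of that behaviour (a sketch of generic rate limiting with
made-up names, not QEMU's actual event-throttle code in the monitor): the
first event is delivered immediately, later events inside the quiet window
only set a pending flag, and a timer delivers one coalesced event when the
window expires, so the emitting path never waits:

  /* throttle.c -- rough model of event throttling, not QEMU's implementation. */
  #include <stdbool.h>
  #include <stdio.h>

  typedef struct {
      long rate_ms;        /* minimum gap between deliveries to management */
      long last_emit_ms;   /* time of the last delivery */
      bool pending;        /* an event arrived during the quiet window */
  } ThrottleSketch;

  /* Called from the emitting path; it never waits, it either delivers now
   * or just marks the event as pending. */
  static void throttle_event(ThrottleSketch *t, long now_ms)
  {
      if (now_ms - t->last_emit_ms >= t->rate_ms) {
          printf("deliver event at %ld ms\n", now_ms);
          t->last_emit_ms = now_ms;
      } else {
          t->pending = true;   /* coalesced; the timer will flush it */
      }
  }

  /* Timer callback that fires once the quiet window has passed. */
  static void throttle_timer(ThrottleSketch *t, long now_ms)
  {
      if (t->pending) {
          printf("deliver coalesced event at %ld ms\n", now_ms);
          t->last_emit_ms = now_ms;
          t->pending = false;
      }
  }

  int main(void)
  {
      ThrottleSketch t = { .rate_ms = 1000, .last_emit_ms = -1000, .pending = false };

      throttle_event(&t, 0);      /* delivered right away */
      throttle_event(&t, 1);      /* inside the window: only marked pending */
      throttle_event(&t, 2);
      throttle_timer(&t, 1000);   /* one coalesced delivery, 1000 ms later */
      return 0;
  }

In this model the 1000 ms shows up only on the management side, as a delay
before the coalesced delivery; the path that runs when the guest updates its
rx-filter just sets a flag and returns.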

> I don't think we need to spend time proving that with real data; it's just
> obvious.

If it's that obvious, why is it so difficult to show?

Let me clarify that I don't oppose the implementation, as long as it's
shown that it's really needed.


