

From: Yuri Benditovich
Subject: Re: [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down
Date: Wed, 23 Nov 2016 11:52:25 +0200

On Thu, Nov 10, 2016 at 3:54 PM, Michael S. Tsirkin <address@hidden> wrote:

> On Thu, Nov 10, 2016 at 01:56:05AM +0200, Yuri Benditovich wrote:
> >
> >
> > On Wed, Nov 9, 2016 at 10:28 PM, Michael S. Tsirkin <address@hidden> wrote:
> >
> >     On Wed, Nov 09, 2016 at 05:22:02PM +0200, address@hidden wrote:
> >     > From: Yuri Benditovich <address@hidden>
> >     >
> >     > https://bugzilla.redhat.com/show_bug.cgi?id=1295637
> >     > Upon the set_link monitor command or upon netdev deletion,
> >     > virtio-net sends a link down indication to the guest
> >     > and stops vhost if one is used.
> >     > The guest driver can still submit data for TX until it
> >     > recognizes the link loss. If these packets are not returned by
> >     > the host, the Windows guest will never be able to finish
> >     > disable/removal/shutdown.
> >     > Now each packet sent by the guest after the NIC indicated link
> >     > down will be completed immediately.
> >     >
> >     > Signed-off-by: Yuri Benditovich <address@hidden>
> >     > ---
> >     >  hw/net/virtio-net.c | 28 ++++++++++++++++++++++++++++
> >     >  1 file changed, 28 insertions(+)
> >     >
> >     > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> >     > index 06bfe4b..ab4e18a 100644
> >     > --- a/hw/net/virtio-net.c
> >     > +++ b/hw/net/virtio-net.c
> >     > @@ -218,6 +218,16 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> >     >      }
> >     >  }
> >     >
> >     > +static void virtio_net_drop_tx_queue_data(VirtIODevice *vdev, VirtQueue *vq)
> >     > +{
> >     > +    VirtQueueElement *elem;
> >     > +    while ((elem = virtqueue_pop(vq, sizeof(VirtQueueElement)))) {
> >     > +        virtqueue_push(vq, elem, 0);
> >     > +        virtio_notify(vdev, vq);
> >     > +        g_free(elem);
> >     > +    }
> >     > +}
> >     > +
> >     >  static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> >     >  {
> >     >      VirtIONet *n = VIRTIO_NET(vdev);
> >
> >     I don't like this part. It does too much queue parsing;
> >     I would like to just copy the head from the avail ring to the used ring.
> >
> >     For example, people want to support rings >1K in size.
> >     Let's add bool virtqueue_drop(vq) and be done with it.
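
(A minimal sketch of what such a bool virtqueue_drop(vq) helper could look
like, built only on the element-level calls already used in the patch above.
The suggestion's real point is that an implementation could instead move the
head from the avail ring straight to the used ring without parsing the
descriptors, which this sketch does not attempt; guest notification is left
to the caller.)

    /* Sketch only: complete one pending request with zero bytes written,
     * i.e. drop it.  Uses virtqueue_pop()/virtqueue_push() as in the patch
     * above; a lower-level variant could copy the head from the avail ring
     * to the used ring without touching the descriptors at all. */
    static bool virtqueue_drop(VirtQueue *vq)
    {
        VirtQueueElement *elem = virtqueue_pop(vq, sizeof(VirtQueueElement));

        if (!elem) {
            return false;            /* queue empty, nothing to drop */
        }
        virtqueue_push(vq, elem, 0); /* report buffer as used, no data written */
        g_free(elem);
        return true;                 /* caller calls virtio_notify() as needed */
    }

The drop loop in the patch would then reduce to:
while (virtqueue_drop(vq)) { virtio_notify(vdev, vq); }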
> >
> >
> > Please note that this code runs only when the link is down.
> > For me it was too complicated to write a simpler procedure
> > with the same result.
>
> Yes - it's somewhat problematic and risky that we process
> the ring in qemu, but I don't see an easy way around that.
> At least let's limit the processing and the assumptions we
> make.
>
>
So, what is the status and how do we make progress?
What kind of change in the patch do you suggest?

Thanks,
Yuri


>
> >
> >
> >     > @@ -262,6 +272,14 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> >     >              } else {
> >     >                  qemu_bh_cancel(q->tx_bh);
> >     >              }
> >     > +            if ((n->status & VIRTIO_NET_S_LINK_UP) == 0 &&
> >     > +                (queue_status & VIRTIO_CONFIG_S_DRIVER_OK)) {
> >     > +                /* if tx is waiting we are likely have some packets in tx queue
>
> ... we likely have some ...
>
> >     > +                 * and disabled notification */
>
> what does this refer to?
>
> >     > +                q->tx_waiting = 0;
> >     > +                virtio_queue_set_notification(q->tx_vq, 1);
> >     > +                virtio_net_drop_tx_queue_data(vdev, q->tx_vq);
> >     > +            }
> >     >          }
> >     >      }
> >     >  }
> >
> >     OK but what if guest keeps sending packets? What will drop them?
> >
> >
> > This code fixes the following problem in the original code (example):
> > We are in vhost=off and receive a kick -> virtio_net_handle_tx_timer
> > -> tx_waiting=1, notification disabled, timer set.
> > Now we receive link loss, cancel the timer and are left with packets in the
> > queue and with disabled notification. Nobody will return them. (Easy to
> > reproduce with the timer set to 5ms.)
> >
> > The added code drops the packets we already have and ensures we report them
> > as completed to the guest. If the guest keeps sending packets, they will be
> > dropped in virtio_net_handle_tx_timer and in virtio_net_handle_tx_bh (in the
> > procedures just below), as the link is already down.
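
(For illustration, a condensed sketch of the first-kick path described above,
not the verbatim QEMU function body; the timer_mod()/tx_timeout details are
assumptions about the surrounding code. It shows the state the link-loss path
used to strand: work pending, notifications off, and only a timer to clean up.)

    /* Condensed sketch of the pre-patch first-kick path described above. */
    static void handle_tx_timer_first_kick(VirtIONet *n, VirtIONetQueue *q,
                                           VirtQueue *vq)
    {
        if (!q->tx_waiting) {
            q->tx_waiting = 1;                    /* remember pending TX work */
            virtio_queue_set_notification(vq, 0); /* suppress further kicks */
            /* flush deferred to a timer, e.g. ~5ms later */
            timer_mod(q->tx_timer,
                      qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
        }
        /* If link loss now cancels the timer, tx_waiting stays set and
         * notification stays disabled: the queued packets are never
         * completed, which is what the patch fixes. */
    }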
>
> Yes I get that. I'm just not 100% sure all paths have
> us listen on the ioeventfd and handle kicks without races -
> this was previously assumed not to matter.
>
>
> >
> >
> >     > @@ -1319,6 +1337,11 @@ static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
> >     >      VirtIONet *n = VIRTIO_NET(vdev);
> >     >      VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> >     >
> >     > +    if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> >     > +        virtio_net_drop_tx_queue_data(vdev, vq);
> >     > +        return;
> >     > +    }
> >     > +
> >     >      /* This happens when device was stopped but VCPU wasn't. */
> >     >      if (!vdev->vm_running) {
> >     >          q->tx_waiting = 1;
> >     > @@ -1345,6 +1368,11 @@ static void virtio_net_handle_tx_bh(VirtIODevice *vdev, VirtQueue *vq)
> >     >      VirtIONet *n = VIRTIO_NET(vdev);
> >     >      VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> >     >
> >     > +    if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> >     > +        virtio_net_drop_tx_queue_data(vdev, vq);
> >     > +        return;
> >     > +    }
> >     > +
> >     >      if (unlikely(q->tx_waiting)) {
> >     >          return;
> >     >      }
> >     > --
> >     > 1.9.1
> >
> >
>

