Re: [RFC PATCH v5 11/26] vhost: Handle host notifiers in SVQ


From: Eugenio Perez Martin
Subject: Re: [RFC PATCH v5 11/26] vhost: Handle host notifiers in SVQ
Date: Tue, 2 Nov 2021 09:46:31 +0100

On Tue, Nov 2, 2021 at 8:55 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2021/10/30 2:35 AM, Eugenio Pérez wrote:
> > If the device supports host notifiers, this saves one hop through the
> > kernel when delivering SVQ notifications to it.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >   hw/virtio/vhost-shadow-virtqueue.h |  2 ++
> >   hw/virtio/vhost-shadow-virtqueue.c | 23 ++++++++++++++++++++++-
> >   2 files changed, 24 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
> > index 30ab9643b9..eb0a54f954 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.h
> > +++ b/hw/virtio/vhost-shadow-virtqueue.h
> > @@ -18,6 +18,8 @@ typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
> >   void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd);
> >   const EventNotifier *vhost_svq_get_dev_kick_notifier(
> >                                                 const VhostShadowVirtqueue *svq);
> > +void vhost_svq_set_host_mr_notifier(VhostShadowVirtqueue *svq, void *addr);
> > +
> >   void vhost_svq_start(struct vhost_dev *dev, unsigned idx,
> >                        VhostShadowVirtqueue *svq, int svq_kick_fd);
> >   void vhost_svq_stop(struct vhost_dev *dev, unsigned idx,
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > index fda60d11db..e3dcc039b6 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > @@ -29,6 +29,12 @@ typedef struct VhostShadowVirtqueue {
> >        * So shadow virtqueue must not clean it, or we would lose VirtQueue one.
> >        */
> >       EventNotifier svq_kick;
> > +
> > +    /* Device's host notifier memory region. NULL means no region */
> > +    void *host_notifier_mr;
> > +
> > +    /* Virtio queue shadowing */
> > +    VirtQueue *vq;
> >   } VhostShadowVirtqueue;
> >
> >   /**
> > @@ -50,7 +56,20 @@ static void vhost_handle_guest_kick(EventNotifier *n)
> >           return;
> >       }
> >
> > -    event_notifier_set(&svq->hdev_kick);
> > +    if (svq->host_notifier_mr) {
> > +        uint16_t *mr = svq->host_notifier_mr;
> > +        *mr = virtio_get_queue_index(svq->vq);
>
>
> Do we need barriers around the possible MMIO here?

That's right, I missed them.
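
For illustration, a minimal sketch of the ordering that is missing: the
helper name vhost_svq_kick_dev is made up here, and smp_wmb() is QEMU's
write barrier from "qemu/atomic.h". Take it as the concern spelled out,
not as the final fix:

    static void vhost_svq_kick_dev(VhostShadowVirtqueue *svq)
    {
        if (svq->host_notifier_mr) {
            uint16_t *mr = svq->host_notifier_mr;

            /*
             * Make SVQ's avail ring and descriptor updates visible to
             * the device before the doorbell write that tells it to
             * process them.
             */
            smp_wmb();
            *mr = virtio_get_queue_index(svq->vq);
        } else {
            /* eventfd path, as in the patch above */
            event_notifier_set(&svq->hdev_kick);
        }
    }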

>
> To avoid that complicated stuff, I'd rather simply go with the eventfd path.
>
> Note that MMIO and eventfd are not mutually exclusive.

Actually, we cannot ignore them, since they are mapped into the guest.
If SVQ does nothing about them, the guest's notifications will travel
directly to the vdpa device, and SVQ will not be able to intercept them.

Taking that into account, it actually requires fewer changes to move
them into SVQ (as in this series) than to disable them (as in previous
series). But we can certainly go with disabling them instead.
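
For illustration, the caller-side wiring of the "move them to SVQ"
option could look like the sketch below. vhost_svq_set_host_mr_notifier()
is the function this patch adds; the surrounding function names, and how
the doorbell page is obtained, are made up here:

    /* Called once the device's doorbell page has been mapped. */
    static void vhost_vdpa_svq_start_notifier(VhostShadowVirtqueue *svq,
                                              void *doorbell_page)
    {
        /* Guest kicks now reach SVQ, which forwards them via MMIO. */
        vhost_svq_set_host_mr_notifier(svq, doorbell_page);
    }

    /* Called on SVQ stop, or if mapping the region failed. */
    static void vhost_vdpa_svq_stop_notifier(VhostShadowVirtqueue *svq)
    {
        /* NULL falls back to the eventfd (hdev_kick) path. */
        vhost_svq_set_host_mr_notifier(svq, NULL);
    }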

Thanks!

>
> Thanks
>
>
> > +    } else {
> > +        event_notifier_set(&svq->hdev_kick);
> > +    }
> > +}
> > +
> > +/*
> > + * Set the device's memory region notifier. addr = NULL clears it.
> > + */
> > +void vhost_svq_set_host_mr_notifier(VhostShadowVirtqueue *svq, void *addr)
> > +{
> > +    svq->host_notifier_mr = addr;
> >   }
> >
> >   /**
> > @@ -134,6 +153,7 @@ void vhost_svq_stop(struct vhost_dev *dev, unsigned idx,
> >    */
> >   VhostShadowVirtqueue *vhost_svq_new(struct vhost_dev *dev, int idx)
> >   {
> > +    int vq_idx = dev->vq_index + idx;
> >       g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
> >       int r;
> >
> > @@ -151,6 +171,7 @@ VhostShadowVirtqueue *vhost_svq_new(struct vhost_dev *dev, int idx)
> >           goto err_init_hdev_call;
> >       }
> >
> > +    svq->vq = virtio_get_queue(dev->vdev, vq_idx);
> >       return g_steal_pointer(&svq);
> >
> >   err_init_hdev_call:
>



