Re: [Qemu-devel] [PATCHv2 for-1.5] virtio-pci: fix level interrupts


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCHv2 for-1.5] virtio-pci: fix level interrupts
Date: Tue, 7 May 2013 15:27:31 +0300

On Tue, May 07, 2013 at 02:20:25PM +0200, KONRAD Frédéric wrote:
> On 07/05/2013 12:20, Michael S. Tsirkin wrote:
> >Mask notifiers are never called without MSI-X,
> >so devices with backend masking, like vhost, don't work.
> >Call the mask notifiers explicitly at
> >startup/cleanup to make this work.
> >
> >Signed-off-by: Michael S. Tsirkin <address@hidden>
> >Tested-by: Alexander Graf <address@hidden>
> >---
> >
> >Changes from v1:
> >     - rebase to master
> >
> >  hw/virtio/virtio-pci.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> >
> >diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> >index d8708c1..c97aee1 100644
> >--- a/hw/virtio/virtio-pci.c
> >+++ b/hw/virtio/virtio-pci.c
> >@@ -744,6 +744,7 @@ static int virtio_pci_set_guest_notifier(DeviceState *d, int n, bool assign,
> >                                           bool with_irqfd)
> >  {
> >      VirtIOPCIProxy *proxy = to_virtio_pci_proxy(d);
> >+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(d);
> 
> I think there is a mistake here.
> VIRTIO_DEVICE_GET_CLASS(proxy->vdev) should be used.

Hmm, yes. I just realized I forgot vhostforce=on in my test
script, so I wasn't actually testing this path at all :(
Once I force it, it crashes happily.

So self-NAK, sorry about the noise.
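
For reference, forcing vhost for a non-MSI-X guest means passing vhostforce=on
on the tap netdev; the invocation looks roughly like the lines below
(an illustrative sketch only, not the actual test script):

    qemu-system-x86_64 ... \
        -netdev tap,id=net0,vhost=on,vhostforce=on \
        -device virtio-net-pci,netdev=net0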

> >      VirtQueue *vq = virtio_get_queue(proxy->vdev, n);
> >      EventNotifier *notifier = virtio_queue_get_guest_notifier(vq);
> >@@ -758,6 +759,10 @@ static int virtio_pci_set_guest_notifier(DeviceState *d, int n, bool assign,
> >          event_notifier_cleanup(notifier);
> >      }
> >+    if (!msix_enabled(&proxy->pci_dev) && vdc->guest_notifier_mask) {
> >+        vdc->guest_notifier_mask(proxy->vdev, n, !assign);
> >+    }
> >+
> >      return 0;
> >  }
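
For reference, the correction Konrad points out would presumably amount to a
one-line change on top of this patch, looking up the class on the contained
VirtIODevice rather than on the proxy DeviceState; a sketch only, not an
actual respin:

    -    VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(d);
    +    VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(proxy->vdev);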


