qemu-devel

From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] Re: [PATCH v4 2/4] virtio-pci: Use ioeventfd for virtqueue notify
Date: Sun, 12 Dec 2010 15:06:08 +0000

On Sun, Dec 12, 2010 at 11:22 AM, Michael S. Tsirkin <address@hidden> wrote:
> On Wed, Nov 17, 2010 at 04:19:27PM +0000, Stefan Hajnoczi wrote:
>> Virtqueue notify is currently handled synchronously in userspace virtio.
>> This prevents the vcpu from executing guest code while hardware emulation
>> code handles the notify.
>>
>> On systems that support KVM, the ioeventfd mechanism can be used to make
>> virtqueue notify a lightweight exit by deferring hardware emulation to the
>> iothread and allowing the VM to continue execution.  This model is similar to
>> how vhost receives virtqueue notifies.
>>
>> The result of this change is improved performance for userspace virtio
>> devices.  Virtio-blk throughput increases especially for multithreaded
>> scenarios and virtio-net transmit throughput increases substantially.
>>
>> Some virtio devices are known to have guest drivers which expect a notify
>> to be processed synchronously and spin waiting for completion.  Only
>> enable ioeventfd for virtio-blk and virtio-net for now.
>>
>> Care must be taken not to interfere with vhost-net, which uses host
>> notifiers.  If the set_host_notifier() API is used by a device
>> virtio-pci will disable virtio-ioeventfd and let the device deal with
>> host notifiers as it wishes.
>>
>> After migration and on VM change state (running/paused) virtio-ioeventfd
>> will enable/disable itself.
>>
>>  * VIRTIO_CONFIG_S_DRIVER_OK -> enable virtio-ioeventfd
>>  * !VIRTIO_CONFIG_S_DRIVER_OK -> disable virtio-ioeventfd
>>  * virtio_pci_set_host_notifier() -> disable virtio-ioeventfd
>>  * vm_change_state(running=0) -> disable virtio-ioeventfd
>>  * vm_change_state(running=1) -> enable virtio-ioeventfd
>>
>> Signed-off-by: Stefan Hajnoczi <address@hidden>
>
> So this is pretty clean. Two things that worry me a bit:
>
> - With vhost-net, it seems that we might now run in userspace just before
>   start or just after stop. This means that e.g. if there's a ring
>   processing bug in userspace we'll get hit by it, which I'd like to avoid.
> - There seems to be a bit of duplication where we call start/stop in a
>   similar way in lots of places. These are really all the places where we
>   call set_status. Might be nice to find a way to reduce that duplication.
>
> I'm trying to think of ways to improve this a bit; if I don't find any,
> I guess I'll just apply the series as is.

Having tried a few different approaches, I've found that some of the
places where it should be possible to share code actually have side
effects or are used slightly differently.  It's one of those cases where
you refactor in one direction and discover that it has complicated things
in another area.  If I come up with a simpler version I'll send patches.
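That said, the five enable/disable rules listed in the commit message do
collapse into a single predicate, which is roughly what each call site
re-derives.  A minimal sketch (the function name and parameters are mine,
not from the patch): ioeventfd should be active exactly when the guest
driver is ready (DRIVER_OK), the VM is running, and no vhost-style device
has claimed the host notifiers via set_host_notifier().

```c
#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_CONFIG_S_DRIVER_OK 4   /* status bit from the virtio spec */

/* Hypothetical condensed form of the rules in the commit message. */
bool ioeventfd_should_run(uint8_t status, bool vm_running,
                          bool host_notifiers_claimed)
{
    return (status & VIRTIO_CONFIG_S_DRIVER_OK)
        && vm_running
        && !host_notifiers_claimed;
}
```

Every event (set_status, vm_change_state, set_host_notifier) would then
just re-evaluate the predicate and start or stop accordingly, instead of
open-coding the decision at each call site.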

>> +static int virtio_pci_stop_ioeventfd(VirtIOPCIProxy *proxy)
>> +{
>> +    int n;
>> +
>> +    if (!proxy->ioeventfd_started) {
>> +        return 0;
>> +    }
>> +
>> +    for (n = 0; n < VIRTIO_PCI_QUEUE_MAX; n++) {
>> +        if (!virtio_queue_get_num(proxy->vdev, n)) {
>> +            continue;
>> +        }
>> +
>> +        virtio_pci_set_host_notifier_fd_handler(proxy, n, false);
>> +        virtio_pci_set_host_notifier_internal(proxy, n, false);
>> +
>> +        /* Now notify the vq to handle the race condition where the
>> +         * guest kicked and we deassigned before we got around to
>> +         * handling the kick.
>> +         */
>
> Can't we just call event_notifier_test_and_clear to figure out whether
> this happened?
>
> This seems cleaner than calling the notifier unconditionally.

Fixed in v5.
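
For reference, the race can be modeled outside QEMU with a raw Linux
eventfd, which is what EventNotifier wraps: after deassigning, read the
counter and only notify the vq if a kick was actually pending, rather
than notifying unconditionally.  A minimal sketch (my own helper, not
the v5 code):

```c
#include <stdbool.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Analogous to QEMU's event_notifier_test_and_clear(): returns true iff
 * the eventfd counter was nonzero, consuming (clearing) it.  The fd must
 * be created with EFD_NONBLOCK so an empty counter fails the read
 * instead of blocking. */
bool test_and_clear(int efd)
{
    uint64_t count = 0;
    return read(efd, &count, sizeof(count)) == sizeof(count) && count > 0;
}
```

The caller in stop_ioeventfd would then invoke the vq handler only when
test_and_clear() reports a pending kick.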

Stefan


