Re: [Qemu-devel] The status about vhost-net on kvm-arm?


From: GAUGUEY Rémy 228890
Subject: Re: [Qemu-devel] The status about vhost-net on kvm-arm?
Date: Fri, 17 Oct 2014 12:49:59 +0000

Thanks for your feedback, 

>static irqreturn_t vm_interrupt(int irq, void *opaque) {
>       ......
>
>       /* Read and acknowledge interrupts */
>       /*status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
>       writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);
>
>       if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
>                       && vdrv && vdrv->config_changed) {
>               vdrv->config_changed(&vm_dev->vdev);
>               ret = IRQ_HANDLED;
>       }*/
>
>       //if (likely(status & VIRTIO_MMIO_INT_VRING)) {
>               spin_lock_irqsave(&vm_dev->lock, flags);
>               list_for_each_entry(info, &vm_dev->virtqueues, node)
>                       ret |= vring_interrupt(irq, info->vq);
>               spin_unlock_irqrestore(&vm_dev->lock, flags);
>       //}
>
>       return ret;
>}
>
>This is very rough :), and a lot of coding work still needs to be done.

I agree ;-)
Anyway, with this "workaround" you disable the control plane interrupt, which
is needed to bring the virtio link up and down... unless the VIRTIO_NET_F_STATUS
feature is off.
I was thinking about connecting those two registers to an ioeventfd in order to
emulate them in vhost and bypass QEMU... but AFAIK ioeventfd can only work with
"write" registers.
Any ideas for a long-term solution?

Best regards,
Rémy

-----Original Message-----
From: Li Liu [mailto:address@hidden]
Sent: Friday, October 17, 2014 14:27
To: GAUGUEY Rémy 228890; Yingshiuan Pan
Cc: address@hidden; address@hidden; qemu-devel
Subject: Re: [Qemu-devel] The status about vhost-net on kvm-arm?



On 2014/10/15 22:39, GAUGUEY Rémy 228890 wrote:
> Hello,
> 
> Using this Qemu patchset as well as the recent irqfd work, I've tried to make 
> vhost-net work on Cortex-A15.
> Unfortunately, even though I can correctly generate irqs to the guest through 
> irqfd, it seems to me that some pieces are still missing…
> Indeed, the virtio-mmio interrupt status register (at offset 0x60) is not 
> updated by the vhost thread, and reading it or writing to the peer 
> interrupt ack register (offset 0x64) from the guest causes a VM exit.
>
> 

Yeah, you are correct. But it's not far from success if you have injected irqs 
into the guest through irqfd. Do the following to let the guest receive packets 
correctly without checking VIRTIO_MMIO_INTERRUPT_STATUS in the guest's virtio_mmio.c:

static irqreturn_t vm_interrupt(int irq, void *opaque)
{
        ......

        /* Skip reading and acknowledging the interrupt status: with irqfd
         * the vhost thread injects the irq directly and never updates
         * VIRTIO_MMIO_INTERRUPT_STATUS, and these accesses would trap back
         * to userspace anyway.  This also drops the config-change path. */
        /*status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
        writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);

        if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
                        && vdrv && vdrv->config_changed) {
                vdrv->config_changed(&vm_dev->vdev);
                ret = IRQ_HANDLED;
        }*/

        /* Unconditionally service every virtqueue instead of testing
         * VIRTIO_MMIO_INT_VRING. */
        //if (likely(status & VIRTIO_MMIO_INT_VRING)) {
                spin_lock_irqsave(&vm_dev->lock, flags);
                list_for_each_entry(info, &vm_dev->virtqueues, node)
                        ret |= vring_interrupt(irq, info->vq);
                spin_unlock_irqrestore(&vm_dev->lock, flags);
        //}

        return ret;
}

This is very rough :), and a lot of coding work still needs to be done.
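For context, the host-side half that makes this hack viable is binding
vhost's "call" eventfd (VHOST_SET_VRING_CALL) to the guest's interrupt line
with KVM_IRQFD: the kernel then injects the irq without QEMU running and
without touching VIRTIO_MMIO_INTERRUPT_STATUS. A minimal sketch, assuming
the KVM VM fd and the GSI of the device's SPI are already known (the helper
name is mine):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Bind an eventfd to a guest irq line: whenever vhost signals call_fd,
 * KVM injects the interrupt directly; no userspace exit, no status
 * register update. */
static int bind_call_irqfd(int vm_fd, int call_fd, unsigned int gsi)
{
        struct kvm_irqfd req = {
                .fd  = call_fd,
                .gsi = gsi,     /* irq routing entry for the device's SPI */
        };

        return ioctl(vm_fd, KVM_IRQFD, &req);
}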

Li.

> After reading older posts, I understand that vhost-net with irqfd support 
> could only work with MSI-X support:
> 
> On 01/20/2011 09:35 AM, Michael S. Tsirkin wrote:
> “When MSI is off, each interrupt needs to be bounced through the io 
> thread when it's set/cleared, so vhost-net causes more context switches and 
> higher CPU utilization than userspace virtio which handles networking in the 
> same thread.”
>
> Indeed, in the MSI-X case, the Virtio spec indicates that the ISR 
> Status field is unused…
> 
> I understand that vhost does not emulate a complete virtio PCI adapter but 
> only manages virtqueue operations.
> However, I don't have a clear view of what is performed by QEMU and what is 
> performed by the vhost thread… Could someone enlighten me on this point, and 
> maybe give some clues for an implementation of vhost with irqfd and without 
> MSI support?
> 
> Thanks a lot in advance.
> Best regards.
> Rémy
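On the split question above: roughly, QEMU keeps the whole device model
(config space, feature negotiation, the ISR register), while the vhost
thread only moves vring data between guest memory and the backend. A
minimal sketch of the handoff using the vhost ioctls; the helper is
hypothetical, the eventfds and tap fd are assumed to be set up already,
and error handling is omitted:

#include <fcntl.h>
#include <linux/vhost.h>
#include <sys/ioctl.h>

/* Hand the datapath of virtqueue 0 to the vhost-net kernel thread.
 * kick_fd: ioeventfd signalled by guest queue notifies (guest -> vhost).
 * call_fd: eventfd vhost signals to raise the guest irq (vhost -> guest).
 * tap_fd:  the tap device carrying the actual packets. */
static void vhost_net_attach(int kick_fd, int call_fd, int tap_fd)
{
        int vhost_fd = open("/dev/vhost-net", O_RDWR);
        struct vhost_vring_file kick = { .index = 0, .fd = kick_fd };
        struct vhost_vring_file call = { .index = 0, .fd = call_fd };
        struct vhost_vring_file back = { .index = 0, .fd = tap_fd };

        ioctl(vhost_fd, VHOST_SET_OWNER, NULL);
        ioctl(vhost_fd, VHOST_SET_VRING_KICK, &kick);
        ioctl(vhost_fd, VHOST_SET_VRING_CALL, &call);
        ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &back);
}

Everything that touches the interrupt status/ack registers stays on the
QEMU side, which is why those registers are never updated once vhost
injects interrupts through irqfd.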
> 
> 
> 
> From: address@hidden [mailto:address@hidden] On behalf of Yingshiuan Pan
> Sent: Friday, August 15, 2014 09:25
> To: Li Liu
> Cc: address@hidden; address@hidden; qemu-devel
> Subject: Re: [Qemu-devel] The status about vhost-net on kvm-arm?
> 
> Hi, Li,
> 
> It's OK, I did get those mails from the mailing list. I guess it was because 
> I did not subscribe to some of the mailing lists.
> 
> Currently I have no plan to renew my patchset: since I have resigned from my 
> previous company, I no longer have a Cortex-A15 platform to test/verify on.
> 
> I'm fine with that; it would be great if you or someone else could take it 
> and improve it.
> Thanks.
> 
> ----
> Best Regards,
> Yingshiuan Pan
> 
> 2014-08-15 11:04 GMT+08:00 Li Liu <address@hidden>:
> Hi Ying-Shiuan Pan,
> 
> I don't know why your mail went missing from my mailbox. Sorry about that.
> The results of vhost-net performance have been attached in another mail.
> 
> Do you have a plan to renew your patchset to support irqfd? If not, we 
> will try to finish it based on yours.
> 
> On 2014/8/14 11:50, Li Liu wrote:
>>
>>
>> On 2014/8/13 19:25, Nikolay Nikolaev wrote:
>>> On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev 
>>> <address@hidden> wrote:
>>>> On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev 
>>>> <address@hidden> wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>>
>>>>> On Tue, Aug 12, 2014 at 5:41 AM, Li Liu 
>>>>> <address@hidden> wrote:
>>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> Can anyone tell me the current status of vhost-net on kvm-arm?
>>>>>>
>>>>>> Half a year has passed since Isa Ansharullah asked this question:
>>>>>> http://www.spinics.net/lists/kvm-arm/msg08152.html
>>>>>>
>>>>>> I have found two patches which provide kvm-arm support 
>>>>>> for eventfd and irqfd:
>>>>>>
>>>>>> 1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of 
>>>>>> KVM on ARM 
>>>>>> http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
>>>>>>
>>>>>> 2) [RFC,v3] ARM: KVM: add irqfd and irq routing support 
>>>>>> https://patches.linaro.org/32261/
>>>>>>
>>>>>> And there's a rough patch for qemu to support eventfd from Ying-Shiuan 
>>>>>> Pan:
>>>>>>
>>>>>> [Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio 
>>>>>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>>>>>
>>>>>> But there are no comments on this patch, and I can find nothing 
>>>>>> about qemu support for irqfd. Have I lost track?
>>>>>>
>>>>>> If nobody is trying to fix this, we have a plan to complete irqfd 
>>>>>> and multiqueue support for virtio-mmio.
>>>>>>
>>>>>>
>>>>>
>>>>> We at Virtual Open Systems did some work and tested vhost-net on 
>>>>> ARM back in March.
>>>>> The setup was based on:
>>>>>  - host kernel with our ioeventfd patches:
>>>>> http://www.spinics.net/lists/kvm-arm/msg08413.html
>>>>>
>>>>> - qemu with the aforementioned patches from Ying-Shiuan Pan 
>>>>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>>>>
>>>>> The testbed was an ARM Chromebook with Exynos 5250, using a 1Gbps 
>>>>> USB3 Ethernet adapter connected to a 1Gbps switch. I can't find 
>>>>> the actual numbers but I remember that with multiple streams the 
>>>>> gain was clearly seen. Note that it used the minimum required 
>>>>> ioeventfd implementation and not irqfd.
>>>>>
>>>>> I guess it is feasible to put it all together and rebase it on the 
>>>>> recent irqfd work. One could achieve even better performance 
>>>>> (because of the irqfd).
>>>>>
>>>>
>>>> Managed to replicate the setup with the old versions we used in March:
>>>>
>>>> Single stream from another machine to the chromebook with a 1Gbps USB3 
>>>> Ethernet adapter.
>>>> iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
>>>> to HOST: 858316 Kbits/sec
>>>> to GUEST: 761563 Kbits/sec
>>> to GUEST vhost=off: 508150 Kbits/sec
>>>>
>>>> 10 parallel streams
>>>> iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
>>>> to HOST: 842420 Kbits/sec
>>>> to GUEST: 625144 Kbits/sec
>>> to GUEST vhost=off: 425276 Kbits/sec
>>
>> I have tested the same cases on a Hisilicon board (address@hidden) 
>> with an integrated 1Gbps Ethernet adapter.
>>
>> iperf -c <address> -P 1 -i 1 -p 5001 -f M -t 10
>> to HOST: 906 Mbits/sec
>> to GUEST: 562 Mbits/sec
>> to GUEST vhost=off: 340 Mbits/sec
>>
>> With 10 parallel streams the performance gains less than 10% more:
>> iperf -c <address> -P 10 -i 1 -p 5001 -f M -t 10
>> to HOST: 923 Mbits/sec
>> to GUEST: 592 Mbits/sec
>> to GUEST vhost=off: 364 Mbits/sec
>>
>> It's easy to see that vhost-net brings great performance improvements, 
>> almost 50%+.
>>
>> Li.
>>
>>>>
>>>>>
>>>>>
>>>>>
>>>>> regards,
>>>>> Nikolay Nikolaev
>>>>> Virtual Open Systems

