
Re: [Qemu-devel] vhost-pci and virtio-vhost-user


From: Wei Wang
Subject: Re: [Qemu-devel] vhost-pci and virtio-vhost-user
Date: Wed, 17 Jan 2018 16:44:45 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0

On 01/16/2018 01:33 PM, Jason Wang wrote:


On 01/15/2018 06:43 PM, Wei Wang wrote:
On 01/15/2018 04:34 PM, Jason Wang wrote:


On 01/15/2018 03:59 PM, Wei Wang wrote:
On 01/15/2018 02:56 PM, Jason Wang wrote:


On 01/12/2018 06:18 PM, Stefan Hajnoczi wrote:


I just fail to understand why we can't do software-defined networking or storage with the existing virtio devices/drivers (or are there shortcomings that force us to invent new infrastructure).


Existing virtio-net works through a centralized vSwitch on the host, which has the following disadvantages (a rough per-packet sketch follows this list):
1) a long code/data path;
2) poor scalability; and
3) host CPU cycles consumed by the vSwitch
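
To make "long code/data path" concrete, here is a rough per-packet sketch (my illustration only, assuming the usual vhost-user/vSwitch layout; the step names are descriptive labels, not functions from QEMU or DPDK):

    # Toy illustration: per-packet steps for VM1 -> VM2 traffic.

    VSWITCH_PATH = [
        "VM1 driver places the packet on its TX virtqueue",
        "vSwitch (vhost-user backend) copies the packet out of VM1 memory",
        "vSwitch performs lookup/forwarding on its own host cores",
        "vSwitch copies the packet into VM2's RX virtqueue",
        "VM2 driver receives the packet",
    ]

    VHOST_PCI_PATH = [
        "VM1 driver places the packet on its TX virtqueue",
        "VM2, with VM1's memory mapped in by the vhost-pci device, "
        "copies the packet straight into its own RX buffer",
        "VM2 driver receives the packet",
    ]

    print("vSwitch path: %d steps, vhost-pci path: %d steps"
          % (len(VSWITCH_PATH), len(VHOST_PCI_PATH)))

The point is that vhost-pci removes the intermediate vSwitch copy and forwarding stage, which is where the extra latency and the extra host CPU cycles go.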

Please show me the numbers.

Sure. For 64B packet transmission between two VMs: vhost-user reaches ~6.8 Mpps and vhost-pci reaches ~11 Mpps, so vhost-pci is ~1.62x faster.


This result is incomplete, so many questions are still left:

- What's the configuration of the vhost-user setup?
- What's the result for, e.g., 1500-byte packets?
- You said it improves scalability, but I can't draw that conclusion from what you've provided here.
- You blame the long code/data path, but give no latency numbers to prove it.


We had an offline meeting with Jason. Future discussion will focus more on the design.

Here is a summary of the additional results we collected for 64B packet transmission, compared to ovs-dpdk (note that although we compare against ovs-dpdk here, vhost-pci isn't meant to replace it: vhost-pci is for inter-VM communication, while packets going to the outside world still go through a traditional backend such as ovs-dpdk):

1) 2-VM communication: over 1.6x higher throughput;
2) 22% lower latency; and
3) in the 5-VM chain communication test, ~6.5x higher throughput, thanks to vhost-pci's better scalability (a toy model illustrating this follows below).
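
As a rough sanity check on the scalability claim, here is a toy model (my sketch, not a measurement; it only assumes the 2-VM rates reported earlier in this thread, and that all hops of a chain serialize on one central vSwitch data path):

    # Toy model: N-VM chain throughput under two assumptions.
    # Assumed inputs: 6.8 Mpps (vhost-user) and 11 Mpps (vhost-pci)
    # from the 2-VM test above.

    def chain_mpps_vswitch(n_vms, vswitch_mpps=6.8):
        # N VMs in a chain means N-1 hops, all competing for the same
        # vSwitch cores, so each flow gets an equal share of its rate.
        return vswitch_mpps / (n_vms - 1)

    def chain_mpps_vhost_pci(n_vms, link_mpps=11.0):
        # Each hop is a direct shared-memory link handled by the VMs
        # themselves, so the chain is limited by a single hop's rate.
        return link_mpps

    for n in (2, 5):
        ratio = chain_mpps_vhost_pci(n) / chain_mpps_vswitch(n)
        print("%d VMs: vSwitch %.2f Mpps, vhost-pci %.2f Mpps, %.2fx"
              % (n, chain_mpps_vswitch(n), chain_mpps_vhost_pci(n), ratio))

Under these assumptions the model gives ~1.62x for 2 VMs and ~6.47x for 5 VMs, in the same ballpark as the measured 1.6x and ~6.5x.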

We'll provide 1500B test results later.

Best,
Wei






