From: Jason Wang
Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
Date: Mon, 22 May 2017 10:27:52 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0



On 2017-05-19 23:33, Stefan Hajnoczi wrote:
On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
On 2017-05-18 11:03, Wei Wang wrote:
On 05/17/2017 02:22 PM, Jason Wang wrote:
On 2017-05-17 14:16, Jason Wang wrote:
On 2017-05-16 15:12, Wei Wang wrote:
Hi:

Care to post the driver code too?

OK. It may take some time to clean up the driver code before posting it.
You can first take a look at the draft in the repo here:
https://github.com/wei-w-wang/vhost-pci-driver

Best,
Wei
Interesting, it looks like there's one copy on the tx side. We used to
have zerocopy support for tun for VM2VM traffic. Could you
please try to compare it with your vhost-pci-net by:

We can analyze the whole data path - from VM1's network stack sending
packets to VM2's network stack receiving them. The number of copies is
actually the same for both.
That's why I'm asking you to compare the performance. The only reason for
vhost-pci is performance. You should prove it.
There is another reason for vhost-pci besides maximum performance:

vhost-pci makes it possible for end-users to run networking or storage
appliances in compute clouds. Cloud providers do not allow end-users to
run custom vhost-user processes on the host, so you need vhost-pci.

Stefan

Then it has non-NFV use cases, and the question goes back to the performance comparison between vhost-pci and zerocopy vhost_net. If it does not perform better, it is less interesting, at least in this case.
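As a minimal sketch of how such an A/B comparison could be driven (not the actual test plan from this thread), the script below runs one iperf3 client pass from the sending guest and prints received throughput. It assumes iperf3 is installed in both guests, an `iperf3 -s` server is already listening in the receiving guest, and RECEIVER_IP / DURATION are placeholders for the real setup; run it once with the vhost-pci-net path and once with zerocopy vhost_net, then compare the numbers.

#!/usr/bin/env python3
# Illustrative comparison driver: one iperf3 client run, throughput printed
# in Gbit/s. Repeat with each backend (vhost-pci-net, zerocopy vhost_net).
import json
import subprocess

RECEIVER_IP = "192.168.1.2"   # hypothetical address of the receiving guest
DURATION = 30                 # seconds per run

def measure(label):
    """Run one iperf3 client pass against the receiver and report throughput."""
    result = subprocess.run(
        ["iperf3", "-c", RECEIVER_IP, "-t", str(DURATION), "-J"],
        check=True, capture_output=True, text=True,
    )
    bps = json.loads(result.stdout)["end"]["sum_received"]["bits_per_second"]
    print(f"{label}: {bps / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    # Label with whichever backend the two VMs are currently wired up with.
    measure("current backend")

The same harness would also want pinned vCPUs, identical guest kernels, and several runs per backend so the copy-count argument above can be checked against measured numbers rather than asserted.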

Thanks


