From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
Date: Fri, 19 May 2017 16:33:29 +0100
User-agent: Mutt/1.8.0 (2017-02-23)

On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
> On 2017-05-18 11:03, Wei Wang wrote:
> > On 05/17/2017 02:22 PM, Jason Wang wrote:
> > > On 2017-05-17 14:16, Jason Wang wrote:
> > > > On 2017-05-16 15:12, Wei Wang wrote:
> > > > > > Hi:
> > > > > > 
> > > > > > Care to post the driver code too?
> > > > > > 
> > > > > OK. It may take some time to clean up the driver code before
> > > > > posting it. In the meantime, you can take a look at the draft
> > > > > in the repo here:
> > > > > https://github.com/wei-w-wang/vhost-pci-driver
> > > > > 
> > > > > Best,
> > > > > Wei
> > > > 
> > > > Interesting, it looks like there's one copy on the tx side. We
> > > > used to have zerocopy support in tun for VM2VM traffic. Could you
> > > > please try to compare it with your vhost-pci-net by:
> > > > 
> > We can analyze the whole data path - from VM1's network stack sending
> > packets to VM2's network stack receiving packets. The number of copies
> > is actually the same for both.
> 
> That's why I'm asking you to compare the performance. The only reason for
> vhost-pci is performance. You should prove it.

There is another reason for vhost-pci besides maximum performance:

vhost-pci makes it possible for end-users to run networking or storage
appliances in compute clouds.  Cloud providers do not allow end-users to
run custom vhost-user processes on the host, so you need vhost-pci.
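
As a rough illustration of the kind of VM2VM comparison requested above,
one way to measure throughput for either setup (vhost-pci-net or the
existing tun zerocopy path) is to run iperf3 between the two guests and
average a few runs. The sketch below is only illustrative; the guest
address, run count, and the assumption that iperf3 is available in both
guests are not from this thread:

#!/usr/bin/env python3
# Illustrative only: measure VM2VM TCP throughput with iperf3.
# Assumes an iperf3 server is already running in the receiving guest
# (e.g. "iperf3 -s") and that its address is reachable from the sender.
import json
import subprocess

RECEIVER_IP = "192.168.100.2"   # hypothetical address of the receiving VM
DURATION_S = 30                 # seconds per run
RUNS = 3                        # average over a few runs to reduce noise

def run_iperf3(server_ip, seconds):
    """Run one iperf3 client session and return throughput in Gbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", server_ip, "-t", str(seconds), "-J"],
        check=True, capture_output=True, text=True,
    ).stdout
    bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    return bps / 1e9

if __name__ == "__main__":
    samples = [run_iperf3(RECEIVER_IP, DURATION_S) for _ in range(RUNS)]
    print("runs (Gbit/s): " + ", ".join("%.2f" % s for s in samples))
    print("mean (Gbit/s): %.2f" % (sum(samples) / len(samples)))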

Stefan
