From: Wei Wang
Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
Date: Tue, 23 May 2017 19:09:05 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0

On 05/20/2017 04:44 AM, Michael S. Tsirkin wrote:
> On Fri, May 19, 2017 at 05:00:37PM +0800, Wei Wang wrote:
>>>> That being said, we compared to vhost-user, instead of vhost_net,
>>>> because vhost-user is the one that is used in NFV, which we think
>>>> is a major use case for vhost-pci.
>>> If this is true, why not draft a pmd driver instead of a kernel one?
>> Yes, that's right. There are actually two directions of the vhost-pci
>> driver implementation - kernel driver and dpdk pmd. The QEMU side
>> device patches are first posted out for discussion, because when the
>> device part is ready, we will be able to have the related team work on
>> the pmd driver as well. As usual, the pmd driver would give a much
>> better throughput.
> For PMD to work though, the protocol will need to support vIOMMU.
> Not asking you to add it right now since it's work in progress
> for vhost user at this point, but something you will have to
> keep in mind. Further, reviewing vhost user iommu patches might be
> a good idea for you.


For the dpdk pmd case, I'm not sure vIOMMU is necessary. Since the device
only needs to share a piece of memory between the two VMs, we can send the
info for just that piece of memory, instead of sending the entire VM's
memory and using vIOMMU to restrict the accessible portion.

Best,
Wei


