
Re: [Qemu-devel] [PATCH 6/6] spec/vhost-user spec: Add IOMMU support


From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH 6/6] spec/vhost-user spec: Add IOMMU support
Date: Wed, 17 May 2017 10:53:44 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0



On May 16, 2017 at 23:16, Michael S. Tsirkin wrote:
On Mon, May 15, 2017 at 01:45:28PM +0800, Jason Wang wrote:

On May 13, 2017 at 08:02, Michael S. Tsirkin wrote:
On Fri, May 12, 2017 at 04:21:58PM +0200, Maxime Coquelin wrote:
On 05/11/2017 08:25 PM, Michael S. Tsirkin wrote:
On Thu, May 11, 2017 at 02:32:46PM +0200, Maxime Coquelin wrote:
This patch specifies and implements the master/slave communication
to support device IOTLB in the slave.

The vhost_iotlb_msg structure introduced for kernel backends is
re-used, making the design close between the two backends.

An exception is the use of the secondary channel to enable the
slave to send IOTLB miss requests to the master.

Signed-off-by: Maxime Coquelin <address@hidden>
---
    docs/specs/vhost-user.txt | 75 +++++++++++++++++++++++++++++++++++++++++++++++
    hw/virtio/vhost-user.c    | 31 ++++++++++++++++++++
    2 files changed, 106 insertions(+)
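
For context, not part of the patch: the vhost_iotlb_msg structure mentioned in the commit message is defined in the kernel's vhost UAPI roughly as follows. This is a sketch based on linux/vhost.h, shown only to illustrate the field layout that the vhost-user message re-uses.

    #include <linux/types.h>     /* __u64, __u8 */

    struct vhost_iotlb_msg {
        __u64 iova;              /* I/O virtual address */
        __u64 size;              /* size of the mapping in bytes */
        __u64 uaddr;             /* user (process) virtual address */
    #define VHOST_ACCESS_RO      0x1
    #define VHOST_ACCESS_WO      0x2
    #define VHOST_ACCESS_RW      0x3
        __u8  perm;              /* permissions flags */
    #define VHOST_IOTLB_MISS           1
    #define VHOST_IOTLB_UPDATE         2
    #define VHOST_IOTLB_INVALIDATE     3
    #define VHOST_IOTLB_ACCESS_FAIL    4
        __u8  type;              /* message type */
    };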

diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index 5fa7016..4a1f0c3 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -97,6 +97,23 @@ Depending on the request type, payload can be:
       log offset: offset from start of supplied file descriptor
           where logging starts (i.e. where guest address 0 would be logged)
+ * An IOTLB message
+   ---------------------------------------------------------
+   | iova | size | user address | permissions flags | type |
+   ---------------------------------------------------------
+
+   IOVA: a 64-bit guest I/O virtual address
guest -> VM
Ok.

+   Size: a 64-bit size
How do you specify "all memory"? Give special meaning to size 0?
Good point, it does not support all memory currently.
It is not vhost-user specific, but general to the vhost implementation.
But the IOMMU needs it to support passthrough.
Probably not, we will just pass the mappings in vhost_memory_region to
vhost. Its memory_size is also a __u64.

Thanks
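
For reference, the vhost_memory_region layout referred to above looks roughly like this in the kernel's vhost UAPI (a sketch based on linux/vhost.h, included here only to show that memory_size is indeed a __u64):

    #include <linux/types.h>     /* __u64 */

    struct vhost_memory_region {
        __u64 guest_phys_addr;   /* GPA where the region starts */
        __u64 memory_size;       /* size of the region in bytes */
        __u64 userspace_addr;    /* HVA backing the region */
        __u64 flags_padding;     /* no flags are currently specified */
    };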
That's different, since those are chunks of QEMU virtual memory.

IOMMU maps IOVA to GPA.


But we in fact cache the IOVA -> HVA mapping in the remote IOTLB. When passthrough mode is enabled, IOVA == GPA, so passing the mappings in vhost_memory_region should be fine.

The only possible "issue" with "all memory" is if you can not use a single TLB invalidation to invalidate all caches in remote TLB. But this is only theoretical problem since it only happen when we have a 1 byte mapping [2^64 - 1, 2^64) cached in remote TLB. Consider:

- E.g. the Intel IOMMU has a range limitation for invalidation (1G currently)
- It looks like all existing IOMMUs use page-aligned mappings

It is probably not a big issue. For safety we could use two invalidations to make sure all caches are flushed remotely, or just change the protocol from (start, size) to (start, end). It is probably too late for this change in vhost-kernel, and I'm still not quite sure it is worthwhile.

Thanks
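
To illustrate the two-invalidation idea mentioned above: a (start, size) pair cannot describe the entire 64-bit IOVA space in a single message, because the size would have to be 2^64, which does not fit in a 64-bit field; two messages of 2^63 bytes each do cover it. A minimal sketch follows; the iotlb_inval structure and invalidate_all helper are hypothetical, for illustration only.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    struct iotlb_inval {            /* hypothetical: just the iova/size pair */
        uint64_t iova;
        uint64_t size;
    };

    /* Cover the whole 64-bit IOVA space with two invalidation ranges. */
    static void invalidate_all(struct iotlb_inval out[2])
    {
        out[0].iova = 0;
        out[0].size = UINT64_C(1) << 63;   /* first half:  [0, 2^63)    */
        out[1].iova = UINT64_C(1) << 63;
        out[1].size = UINT64_C(1) << 63;   /* second half: [2^63, 2^64) */
    }

    int main(void)
    {
        struct iotlb_inval r[2];
        invalidate_all(r);
        for (int i = 0; i < 2; i++) {
            printf("invalidate iova=0x%016" PRIx64 " size=0x%016" PRIx64 "\n",
                   r[i].iova, r[i].size);
        }
        return 0;
    }

The alternative noted above, switching the message to a (start, end) pair, avoids the split entirely, since end = 2^64 - 1 naturally describes "up to the end of the address space".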


