qemu-devel
Re: [Qemu-devel] [RFC v2 0/8] VIRTIO-IOMMU device


From: Jean-Philippe Brucker
Subject: Re: [Qemu-devel] [RFC v2 0/8] VIRTIO-IOMMU device
Date: Mon, 19 Jun 2017 11:15:28 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.1.1

On 19/06/17 08:54, Bharat Bhushan wrote:
> Hi Eric,
> 
> I started adding replay in virtio-iommu and came across how MSI interrupts 
> work with VFIO. 
> I understand that on Intel this works differently, but vSMMU will have the 
> same requirement. 
> kvm-msi-irq-routes are added using the MSI address that is to be translated 
> by the vIOMMU, not the final translated address, while the irqfd framework 
> currently does not know about emulated IOMMUs (virtio-iommu, 
> vsmmuv3/vintel-iommu).
> So in my view we have following options:
> - Programming with translated address when setting up kvm-msi-irq-route
> - Route the interrupts via QEMU, which is bad for performance
> - vhost-virtio-iommu may solve the problem in long term
> 
> Is there any other better option I am missing?
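
To make the trade-off concrete, here is a rough C sketch of the MSI routing entry that userspace ultimately programs into KVM via KVM_SET_GSI_ROUTING. The struct and KVM_IRQ_ROUTING_MSI come from the kernel UAPI headers; viommu_translate() is a hypothetical stand-in for the IOVA->GPA lookup that the first option above would require before the route is installed.

```c
/* Sketch: building a KVM MSI routing entry. The open question in this
 * thread is which address belongs here: the IOVA the guest programmed
 * (which KVM cannot push through an emulated vIOMMU), or the GPA after
 * a translation step done in userspace beforehand. */
#include <linux/kvm.h>
#include <stdint.h>

/* Hypothetical stand-in for a vIOMMU IOVA->GPA lookup done in userspace. */
static uint64_t viommu_translate(uint64_t iova)
{
    return iova; /* identity map for the sketch */
}

static struct kvm_irq_routing_entry
make_msi_route(uint32_t gsi, uint64_t msi_addr, uint32_t msi_data,
               int translate_first)
{
    struct kvm_irq_routing_entry e = {0};
    uint64_t addr = translate_first ? viommu_translate(msi_addr) : msi_addr;

    e.gsi  = gsi;
    e.type = KVM_IRQ_ROUTING_MSI;
    e.u.msi.address_lo = (uint32_t)addr;         /* low 32 bits of doorbell */
    e.u.msi.address_hi = (uint32_t)(addr >> 32); /* high 32 bits */
    e.u.msi.data = msi_data;
    return e;
}
```

The point is that KVM only ever sees the raw address_lo/address_hi pair, so whatever translation is needed has to happen in userspace before this entry is built.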

Since we're on the topic of MSIs... I'm currently trying to figure out how
we'll handle MSIs in the nested translation mode, where the guest manages
S1 page tables and the host doesn't know about GVA->GPA translation.

I'm also wondering about the benefits of having SW-mapped MSIs in the
guest. It seems unavoidable for vSMMU since that's what a physical system
would do. But in a paravirtualized solution there doesn't seem to be any
compelling reason for having the guest map MSI doorbells. These addresses
are never accessed directly, they are only used for setting up IRQ routing
(at least on kvmtool). So here's what I'd like to have. Note that I
haven't investigated the feasibility in Qemu yet, so I don't know how it
deals with MSIs.

(1) Guest uses the guest-physical MSI doorbell when setting up MSIs. For
ARM with GICv3 this would be GITS_TRANSLATER, for x86 it would be the
fixed MSI doorbell. This way the host wouldn't need to inspect IOMMU
mappings when handling writes to PCI MSI-X tables.
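
As a sketch of what (1) buys us: when the host traps a guest write to an MSI-X table entry, the 16-byte layout below (fixed by the PCI MSI-X spec) is all it needs to parse, and under (1) the recovered address is already the doorbell GPA, usable directly for IRQ routing with no vIOMMU walk. The parsing assumes a little-endian host, matching PCI register byte order.

```c
/* Sketch: parsing a trapped MSI-X table entry write. Offsets follow the
 * PCI MSI-X spec: 0x0 Message Address (low), 0x4 Message Upper Address,
 * 0x8 Message Data, 0xC Vector Control. */
#include <stdint.h>
#include <string.h>

struct msix_entry {
    uint64_t addr; /* message address: doorbell GPA under proposal (1) */
    uint32_t data; /* message data */
    uint32_t ctrl; /* vector control (bit 0 = per-vector mask) */
};

static struct msix_entry parse_msix_entry(const uint8_t table[16])
{
    struct msix_entry e;
    uint32_t lo, hi;

    memcpy(&lo, table + 0, 4);      /* Message Address (low) */
    memcpy(&hi, table + 4, 4);      /* Message Upper Address */
    memcpy(&e.data, table + 8, 4);  /* Message Data */
    memcpy(&e.ctrl, table + 12, 4); /* Vector Control */
    e.addr = ((uint64_t)hi << 32) | lo;
    return e;
}
```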

(2) In nested mode (with VFIO) on ARM, the pSMMU will still translate MSIs
via S1+S2. Therefore the host needs to map MSIs at stage-1, and I'd like
to use the (currently unused) TTB1 tables in that case. In addition, using
TTB1 would be useful for SVM, when endpoints write MSIs with PASIDs and we
don't want to map them in user address space.

This means that the host needs to use different doorbell addresses in
nested mode, since it would be unable to map at S1 the same IOVA as at S2
(TTB1 manages negative addresses, 0xffff............, which are not
representable as GPAs). It also requires using 32-bit page tables for
endpoints that are not capable of using 64-bit MSI addresses.
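
For reference, a small sketch of the VMSAv8-64 region selection that makes TTB1 doorbell IOVAs unrepresentable as GPAs: an address is translated via TTBR1 only if all of its bits [63:64-T1SZ] are set, i.e. it lies in the topmost 2^(64-T1SZ) bytes of the address space (e.g. 0xffff............ for T1SZ = 16).

```c
/* Sketch: TTBR1 region check for VMSAv8-64. Assumes a sane t1sz
 * (architecturally at least 16), so the shift below is well defined. */
#include <stdbool.h>
#include <stdint.h>

static bool in_ttb1_region(uint64_t va, unsigned t1sz)
{
    unsigned va_bits = 64 - t1sz;                    /* VA bits per region */
    uint64_t region_mask = ~((1ULL << va_bits) - 1); /* bits selecting TTB1 */

    return (va & region_mask) == region_mask; /* top bits all ones? */
}
```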


Now (2) is entirely handled in the host kernel, so it's more a Linux
question. But does (1) seem acceptable for virtio-iommu in Qemu?

Thanks,
Jean


