Re: [Qemu-arm] [Qemu-devel] [RFC v2 0/8] VIRTIO-IOMMU device


From: Tian, Kevin
Subject: Re: [Qemu-arm] [Qemu-devel] [RFC v2 0/8] VIRTIO-IOMMU device
Date: Wed, 5 Jul 2017 07:25:43 +0000

> From: Jean-Philippe Brucker [mailto:address@hidden]
> Sent: Tuesday, June 27, 2017 12:13 AM
> 
> On 26/06/17 09:22, Auger Eric wrote:
> > Hi Jean-Philippe,
> >
> > On 19/06/2017 12:15, Jean-Philippe Brucker wrote:
> >> On 19/06/17 08:54, Bharat Bhushan wrote:
> >>> Hi Eric,
> >>>
> >>> I started adding replay in virtio-iommu and came across how MSI
> >>> interrupts work with VFIO.
> >>> I understand that on Intel this works differently, but vsmmu will have
> >>> the same requirement.
> >>> kvm-msi-irq-routes are added using the MSI address that is still to be
> >>> translated by the vIOMMU, not the final translated address, while the
> >>> irqfd framework currently does not know about emulated IOMMUs
> >>> (virtio-iommu, vsmmuv3/vintel-iommu).
> >>> So in my view we have the following options:
> >>> - Program the translated address when setting up the kvm-msi-irq-route
> >>> - Route the interrupts via QEMU, which is bad for performance
> >>> - vhost-virtio-iommu may solve the problem in the long term
> >>>
> >>> Is there any other better option I am missing?
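
To make the first option concrete, here is a minimal sketch of programming
the KVM MSI route with the already-translated doorbell address, using the
regular KVM irq-routing uapi. viommu_translate_msi() and fill_msi_route()
are made-up names; the translation hook stands in for whatever lookup the
emulated IOMMU would actually expose:

#include <stdint.h>
#include <linux/kvm.h>

/* Hypothetical stand-in for the emulated IOMMU's MSI doorbell lookup. */
static uint64_t viommu_translate_msi(void *viommu, uint64_t guest_msi_addr)
{
    (void)viommu;
    /* A real implementation would walk the vIOMMU mappings here. */
    return guest_msi_addr;
}

/* Fill a KVM MSI route entry with the translated doorbell address. */
static void fill_msi_route(struct kvm_irq_routing_entry *e, uint32_t gsi,
                           void *viommu, uint64_t guest_msi_addr,
                           uint32_t msi_data)
{
    uint64_t addr = viommu_translate_msi(viommu, guest_msi_addr);

    e->gsi = gsi;
    e->type = KVM_IRQ_ROUTING_MSI;
    e->flags = 0;
    e->u.msi.address_lo = (uint32_t)addr;
    e->u.msi.address_hi = (uint32_t)(addr >> 32);
    e->u.msi.data = msi_data;
}
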
> >>
> >> Since we're on the topic of MSIs... I'm currently trying to figure out how
> >> we'll handle MSIs in the nested translation mode, where the guest manages
> >> S1 page tables and the host doesn't know about GVA->GPA translation.
> >
> > I have a question about the "nested translation mode" terminology. Do
> > you mean that in that case you use stage 1 + stage 2 of the physical
> > IOMMU (which the ARM spec normally advises, or was meant for), or do you
> > mean stage 1 implemented in the vIOMMU and stage 2 implemented in the
> > pIOMMU? At the moment my understanding is that for VFIO integration the
> > pIOMMU uses a single stage combining both the stage 1 and stage 2
> > mappings, but the host is not aware of those two stages.
> 
> Yes, at the moment the VMM merges stage-1 (GVA->GPA) from the guest with
> its stage-2 mappings (GPA->HPA) and creates a stage-2 mapping (GVA->HPA)
> in the pIOMMU via VFIO_IOMMU_MAP_DMA. Stage-1 is disabled in the pIOMMU.
> 
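
Just to make sure I parse this correctly, a minimal sketch of that merge
step as you describe it: for each guest stage-1 mapping GVA->GPA, the VMM
resolves the GPA to its own virtual address for guest RAM and installs
GVA->HPA through the real VFIO_IOMMU_MAP_DMA ioctl. gpa_to_hva() and
shadow_map() below are made-up helper names:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Made-up helper: the VMM's own GPA -> HVA lookup for guest RAM. */
static void *gpa_to_hva(uint64_t gpa)
{
    (void)gpa;
    /* Stub: a real VMM would look this up in its guest-RAM memory map. */
    return NULL;
}

/*
 * Shadow one guest stage-1 mapping (GVA -> GPA) into the pIOMMU stage-2 as
 * GVA -> HPA, by handing the corresponding host virtual address to VFIO.
 */
static int shadow_map(int container_fd, uint64_t gva, uint64_t gpa,
                      uint64_t size)
{
    struct vfio_iommu_type1_dma_map map;

    memset(&map, 0, sizeof(map));
    map.argsz = sizeof(map);
    map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
    map.iova  = gva;                        /* IOVA the device will use */
    map.vaddr = (uintptr_t)gpa_to_hva(gpa); /* pinned and translated to HPA */
    map.size  = size;

    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}
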

Curious whether you are describing the current SMMU status, or a general
vIOMMU picture that also applies to other vendors...

The usage you described is about SVM, and SVM requires PASID.
At least on Intel VT-d, PASID is tied to stage-1. Only DMA without PASID,
or nested translation coming out of stage-1, goes through stage-2. Unless
the ARM SMMU has a completely different implementation, I'm not sure
how SVM can be virtualized with stage-1 translation disabled. There
are multiple stage-1 page tables but only one stage-2 page table per
device, so could merging actually work here?
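
Roughly the shape I have in mind (the structures below are made up, purely
to illustrate the cardinality): SVM means one stage-1 table per PASID,
while the device still has only one stage-2 table to merge into:

#include <stdint.h>

/* Illustrative only -- not real SMMU or VT-d structures. */
struct stage1_table {          /* one per PASID (per process address space) */
    uint32_t pasid;
    uint64_t s1_pgd;           /* guest-managed GVA -> GPA table root */
};

struct device_iommu_ctx {
    uint64_t s2_pgd;           /* the single GPA -> HPA table for the device */
    struct stage1_table *s1;   /* many stage-1 tables hang off one stage-2 */
    unsigned int nr_pasids;
};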

The only case where merging happens today is the guest stage-2 usage,
the so-called GIOVA usage. The guest programs GIOVA->GPA into the vIOMMU
stage-2; the vIOMMU then invokes the VFIO map/unmap APIs to translate and
merge that into GIOVA->HPA in the pIOMMU stage-2. Maybe that is what you
actually meant?
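
For that GIOVA case, the shadowing on the VFIO side is essentially an unmap
of the stale GIOVA range on guest invalidation, followed by a re-map of the
new GIOVA->GPA entry (resolved to an HVA) via VFIO_IOMMU_MAP_DMA as above.
A rough sketch of the unmap half, using the real type1 ioctl; shadow_unmap()
is a made-up name:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * On a guest invalidation of a GIOVA range, drop the stale shadow entry
 * from the pIOMMU stage-2 before re-mapping the new GIOVA -> GPA entry.
 */
static int shadow_unmap(int container_fd, uint64_t giova, uint64_t size)
{
    struct vfio_iommu_type1_dma_unmap unmap;

    memset(&unmap, 0, sizeof(unmap));
    unmap.argsz = sizeof(unmap);
    unmap.iova  = giova;
    unmap.size  = size;

    return ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &unmap);
}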

Thanks
Kevin
