
Re: [PATCH v16 00/10] VIRTIO-IOMMU device


From: Jean-Philippe Brucker
Subject: Re: [PATCH v16 00/10] VIRTIO-IOMMU device
Date: Thu, 5 Mar 2020 08:34:17 +0100

On Thu, Mar 05, 2020 at 02:56:20AM +0000, Tian, Kevin wrote:
> > From: Jean-Philippe Brucker <address@hidden>
> > Sent: Thursday, March 5, 2020 12:47 AM
> >
> [...]
> > > >
> > > > * We can't use DVM in nested mode unless the VMID is shared with the
> > > > CPU. For that we'll need the host SMMU driver to hook into the KVM
> > VMID
> > > > allocator, just like we do for the ASID allocator. I haven't yet
> > > > investigated how to do that. It's possible to do vSVA without DVM
> > > > though, by sending all TLB invalidations through the SMMU command
> > queue.
> > > > "
> > 
> > Hm we're already mandating DVM for host SVA, so I'd say mandate it for
> > vSVA as well. We'd avoid a ton of context switches, especially for the zip
> > accelerator which doesn't require ATC invalidations. The host needs to pin
> > the VMID allocated by KVM and write it in the endpoint's STE.
> > 
> 
> Curious... what is DVM and how is it related to SVA? Is it SMMU specific?

Yes, it stands for "Distributed Virtual Memory", an Arm interconnect
protocol. When sharing a process address space, TLB invalidations from the
CPU are broadcast to the SMMU, so we don't have to send commands through
the SMMU command queue to invalidate IOTLBs. However, the ATCs of PCIe
endpoints do not participate in DVM and still have to be invalidated by hand.
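
To make that last point concrete, here is a rough sketch of what an
address-range invalidation for a shared address space ends up doing when
DVM covers the IOTLB side. This is not the actual SMMUv3 driver code: the
helper name smmu_cmdq_submit_atc_inv(), the atc_inv_cmd structure and the
hook name are made up for illustration.

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

struct atc_inv_cmd {
	uint32_t sid;     /* StreamID of the PCIe endpoint */
	uint32_t ssid;    /* PASID/SubstreamID of the shared address space */
	uint64_t addr;    /* start of the range to invalidate */
	size_t   size;    /* size of the range */
	bool     global;  /* invalidate all entries for this SID/SSID */
};

/* Hypothetical stand-in for submitting a command to the SMMU queue. */
void smmu_cmdq_submit_atc_inv(const struct atc_inv_cmd *cmd);

/*
 * Called when the CPU MMU invalidates [start, end) of a shared process
 * address space (e.g. from an MMU-notifier-like hook).
 */
static void sva_invalidate_range(uint32_t sid, uint32_t ssid,
				 uint64_t start, uint64_t end)
{
	/*
	 * No IOTLB command needed here: the CPU's TLBI is broadcast over
	 * DVM and the SMMU invalidates its own TLB entries in response.
	 */

	/* The ATC does not listen to DVM, so invalidate it explicitly. */
	struct atc_inv_cmd cmd = {
		.sid    = sid,
		.ssid   = ssid,
		.addr   = start,
		.size   = end - start,
		.global = false,
	};
	smmu_cmdq_submit_atc_inv(&cmd);
}

The DVM broadcast removes the IOTLB invalidation command entirely; only
the ATC invalidation still has to go through the command queue.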

Thanks,
Jean


