
From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH RFC 0/3] vfio: allow to notify unmap for very big region
Date: Fri, 20 Jan 2017 10:14:01 -0700

On Fri, 20 Jan 2017 20:27:18 +0800
Peter Xu <address@hidden> wrote:

> On Fri, Jan 20, 2017 at 11:43:28AM +0800, Peter Xu wrote:
> 
> [...]
> 
> > > What I don't want to see is for this API bug to leak out into the rest
> > > of the QEMU code such that intel_iommu code, or iommu code in general
> > > subtly avoids it by artificially using a smaller range.  VT-d hardware
> > > has an actual physical address space of either 2^39 or 2^48 bits, so if
> > > you want to make the iommu address space match the device we're trying
> > > to emulate, that's perfectly fine.  AIUI, AMD-Vi does actually have a
> > > 64-bit address space on the IOMMU, so to handle that case I'd expect
> > > the simplest solution would be to track the mapped iova high water
> > > mark per container in vfio and truncate unmaps to that high water end
> > > address.  Realistically we're probably not going to see iovas at the end
> > > of the 64-bit address space, but we can come up with some other
> > > workaround in the vfio code or update the kernel API if we do.  Thanks,  
> > 
> > Agree that high watermark can be a good solution for VT-d. I'll use
> > that instead of 2^63-1.  
> 
> Okay, when I replied I didn't realize this "watermark" might need
> more than a few (or even tens of) lines of code. :(
> 
> Considering that I see no further use for this watermark, I'm
> wondering whether it's okay to directly use (1ULL << VTD_MGAW) here
> as the watermark - it's simple, efficient and secure, imho.
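
For concreteness, a minimal sketch (not QEMU code) of the two options
being discussed - bounding the "unmap everything" range either by the
emulated hardware's address width or by a per-container high water
mark.  VTD_MGAW stands in for the VT-d maximum guest address width,
and the container struct and helper names here are hypothetical:

#include <inttypes.h>
#include <stdio.h>

#define VTD_MGAW 39   /* assumed: 39-bit guest address width */

typedef struct Container {
    uint64_t mapped_iova_hwm;   /* highest iova+size mapped so far */
} Container;

/* Record each mapping so the high water mark tracks real usage. */
static void container_record_map(Container *c, uint64_t iova,
                                 uint64_t size)
{
    uint64_t end = iova + size;   /* exclusive end of the mapping */
    if (end > c->mapped_iova_hwm) {
        c->mapped_iova_hwm = end;
    }
}

/* Clamp a "whole address space" unmap to a range the API can express. */
static uint64_t unmap_size(Container *c, uint64_t iova)
{
    /* Option 1: bound by the emulated hardware property... */
    uint64_t hw_limit = 1ULL << VTD_MGAW;
    /* Option 2: ...or by the high water mark of what was mapped. */
    uint64_t limit = c->mapped_iova_hwm ? c->mapped_iova_hwm : hw_limit;
    return limit > iova ? limit - iova : 0;
}

int main(void)
{
    Container c = { 0 };
    container_record_map(&c, 0x1000, 0x200000);
    printf("unmap size from 0: 0x%" PRIx64 "\n", unmap_size(&c, 0));
    return 0;
}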

Avoiding the issue based on the virtual iommu hardware properties is a
fine solution; my intention was only to discourage introducing
artificial limitations into the surrounding code just to avoid this
vfio issue.  Thanks,

Alex


