Re: [PATCH] make vfio and DAX cache work together


From: Dr. David Alan Gilbert
Subject: Re: [PATCH] make vfio and DAX cache work together
Date: Tue, 27 Apr 2021 20:00:42 +0100
User-agent: Mutt/2.0.6 (2021-03-06)

* Alex Williamson (alex.williamson@redhat.com) wrote:
> On Tue, 27 Apr 2021 17:29:37 +0100
> Dev Audsin <dev.devaqemu@gmail.com> wrote:
> 
> > Hi Alex
> > 
> > Based on your comments and thinking a bit, wonder if it makes sense to
> > allow DMA map for the DAX cache but make unexpected mappings to be not
> > fatal. Please let me know your thoughts.
> 
> I think you're still working on the assumption that simply making the
> VM boot is an improvement; it's not.  If there's a risk that a possible
> DMA target for the device cannot be mapped, it's better that the VM
> fail to boot than to expose that risk.  Performance cannot compromise
> correctness.
> 
> We do allow DMA mappings to other device memory regions to fail
> non-fatally with the logic that peer-to-peer DMA is often not trusted
> to work by drivers and therefore support would be probed before
> assuming that it works.  I don't think that same logic applies here.
> 
> Is there something about the definition of this particular region that
> precludes it from being a DMA target for an assigned device?

It's never really the RAM itself that's used.
This area is really a chunk of VMA that's mmap'd over by (chunks of)
normal files in the underlying exported filesystem.  The actual RAM
block itself is just a placeholder for the VMA, and is normally mapped
PROT_NONE until an actual file is mapped on top of it.
That cache BAR is a mapping containing multiple separate file chunk
mappings.

So I guess the problems for VFIO are:
  a) At the start it's unmapped, inaccessible, unallocated RAM.
  b) Later it's arbitrary chunks of on-disk files.

[on a bad day, and it's bad even without vfio, someone truncates the
file mapping]

Dave

> Otherwise if it's initially unpopulated, maybe something like the
> RamDiscardManager could be used to insert DMA mappings as the region
> becomes populated.
> 
> Simply disabling mapping to boot with both features together, without
> analyzing how that missing mapping affects their interaction is not
> acceptable.  Thanks,
> 
> Alex
> 
> > On Mon, Apr 26, 2021 at 10:22 PM Alex Williamson
> > <alex.williamson@redhat.com> wrote:
> > >
> > > On Mon, 26 Apr 2021 21:50:38 +0100
> > > Dev Audsin <dev.devaqemu@gmail.com> wrote:
> > >  
> > > > Hi Alex and David
> > > >
> > > > @Alex:
> > > >
> > > > Justification on why this region cannot be a DMA target for the device,
> > > >
> > > > virtio-fs with DAX is currently not compatible with NIC passthrough.
> > > > When an SR-IOV VF attaches to a QEMU process, vfio will try to pin
> > > > the entire DAX window, but the window is empty when the guest boots,
> > > > so the pinning fails.  One way to make VFIO and DAX work together is
> > > > to make vfio skip the DAX cache.
> > > >
> > > > Currently the DAX cache needs to be set to 0 for an SR-IOV VF to be
> > > > attached to Kata Containers.  Enabling SR-IOV VF and DAX to work
> > > > together will potentially improve performance for workloads which
> > > > are both I/O and network intensive.  
> > >
> > > Sorry, there's no actual justification described here.  You're enabling
> > > a VM with both features, virtio-fs DAX and VFIO, but there's no
> > > evidence that they "work together" or that your use case is simply
> > > avoiding a scenario where the device might attempt to DMA into the area
> > > with this designation.  With this change, if the device were to attempt
> > > to DMA into this region, it would be blocked by the IOMMU, which might
> > > result in data loss within the VM.  Justification of this change
> > > needs to prove that this region can never be a DMA target for the
> > > device, not simply that both features can be enabled and we hope that
> > > they don't interact.  Thanks,
> > >
> > > Alex
> > >  
> > 
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



