qemu-devel

Re: [PATCH] make vfio and DAX cache work together


From: Dr. David Alan Gilbert
Subject: Re: [PATCH] make vfio and DAX cache work together
Date: Tue, 4 May 2021 10:58:15 +0100
User-agent: Mutt/2.0.6 (2021-03-06)

* Dev Audsin (dev.devaqemu@gmail.com) wrote:
> Thanks David. I did a quick test with the above patch and it seems to
> work for me. With this patch, apparently I can create a VM with an
> SR-IOV VF and DAX cache (virtio_fs_cache_size = 1024).

Great! I'll put it in the next set of DAX patches I send.

Dave

> Thanks
> Dev
> 
> On Thu, Apr 29, 2021 at 6:55 PM Dr. David Alan Gilbert
> <dgilbert@redhat.com> wrote:
> >
> > * Alex Williamson (alex.williamson@redhat.com) wrote:
> > > On Thu, 29 Apr 2021 09:44:51 +0100
> > > "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > >
> > > > * Alex Williamson (alex.williamson@redhat.com) wrote:
> > > > > On Wed, 28 Apr 2021 20:17:23 +0100
> > > > > "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > > > >
> > > > > > * Dev Audsin (dev.devaqemu@gmail.com) wrote:
> > > > > > > Thanks Dave for your explanation.
> > > > > > > Any suggestions on how to make VFIO not attempt to map into the
> > > > > > > inaccessible, unallocated RAM?
> > > > > >
> > > > > > I'm not sure:
> > > > > >
> > > > > > static bool vfio_listener_skipped_section(MemoryRegionSection *section)
> > > > > > {
> > > > > >     return (!memory_region_is_ram(section->mr) &&
> > > > > >             !memory_region_is_iommu(section->mr)) ||
> > > > > >            section->offset_within_address_space & (1ULL << 63);
> > > > > > }
> > > > > >
> > > > > > I'm declaring that region with memory_region_init_ram_ptr; should I be?
> > > > > > It's not quite like RAM.
> > > > > > But then I *do* want a kvm slot for it, and I do want it to be accessed
> > > > > > by mapping rather than calling IO functions; that makes me think mr->ram
> > > > > > has to be true.
> > > > > > But then do we need to add another flag to memory-regions; if we do,
> > > > > > what is it?
> > > > > >    a) We don't want an 'is_virtio_fs' - it needs to be more generic
> > > > > >    b) 'no_vfio' also feels wrong
> > > > > >
> > > > > > Is perhaps 'not_lockable' the right thing to call it?
> > > > >
> > > > > This reasoning just seems to lead back to "it doesn't work, therefore
> > > > > don't do it" rather than identifying the property of the region that
> > > > > makes it safe not to map it for device DMA (assuming that's actually
> > > > > the case).
> > > >
> > > > Yes, I'm struggling to get to what that generic form of that property
> > > > is, possibly because I've not got an example of another case to compare
> > > > it with.
> > > >
> > > > > It's clearly "RAM" as far as QEMU is concerned given how
> > > > > it's created, but does it actually appear in the VM as generic 
> > > > > physical
> > > > > RAM that the guest OS could program to the device as a DMA target?  If
> > > > > not, what property makes that so, create a flag for that.  Thanks,
> > > >
> > > > The guest sees it as a PCI-bar; so it knows it's not 'generic physical
> > > > RAM' - but can a guest set other BARs (like frame buffers or pmem) as
> > > > DMA targets?  If so, how do I distinguish our bar?
> > >
> > > They can, this is how peer-to-peer DMA between devices works.  However,
> > > we can perhaps take advantage of the fact that drivers are generally a
> > > bit more cautious in probing whether this type of DMA works before
> > > relying on it, and declare it with memory_region_init_ram_device_ptr(), which vfio
> > > would not consider fatal if it fails to map it.  The other semantic
> > > difference is that ram_device_mem_ops are used for read/write access to
> > > avoid some of the opcodes that are not meant to be used for physical
> > > device memory with the default memcpy ops.  If you expect this region
> > > to be mapped as a kvm memory slot, presumably these would never get
> > > used anyway.  Thanks,
> >
> > Oh, nice, I hadn't spotted memory_region_init_ram_device_ptr();
> >
> > diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
> > index 7afd9495c9..11fb9b5979 100644
> > --- a/hw/virtio/vhost-user-fs.c
> > +++ b/hw/virtio/vhost-user-fs.c
> > @@ -604,7 +604,7 @@ static void vuf_device_realize(DeviceState *dev, Error **errp)
> >              return;
> >          }
> >
> > -        memory_region_init_ram_ptr(&fs->cache, OBJECT(vdev),
> > +        memory_region_init_ram_device_ptr(&fs->cache, OBJECT(vdev),
> >                                     "virtio-fs-cache",
> >                                     fs->conf.cache_size, cache_ptr);
> >      }
> >
> > apparently still works for us; Dev, does that fix it for you?
> >
> > Dave
> >
> > > Alex
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



