From: Tian, Kevin
Subject: Re: [Qemu-devel] [PATCH v9 04/12] vfio iommu: Add support for mediated devices
Date: Wed, 26 Oct 2016 07:54:56 +0000

> From: Tian, Kevin
> Sent: Wednesday, October 26, 2016 3:54 PM
> 
> > From: Alex Williamson [mailto:address@hidden]
> > Sent: Thursday, October 20, 2016 5:03 AM
> > > @@ -83,6 +92,21 @@ struct vfio_group {
> > >  };
> > >
> > >  /*
> > > + * Guest RAM pinning working set or DMA target
> > > + */
> > > +struct vfio_pfn {
> > > + struct rb_node          node;
> > > + unsigned long           vaddr;          /* virtual addr */
> > > + dma_addr_t              iova;           /* IOVA */
> > > + unsigned long           pfn;            /* Host pfn */
> > > + int                     prot;
> > > + atomic_t                ref_count;
> > > +};
> >
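For context, a minimal sketch of how a pinned-pfn working set like the
struct quoted above is typically searched in an rb-tree keyed on iova;
the helper name is an assumption for illustration, not code from the
patch:

/*
 * Sketch only: rb-tree lookup over the vfio_pfn set, keyed on iova.
 */
#include <linux/rbtree.h>

static struct vfio_pfn *vfio_find_vpfn_sketch(struct rb_root *pfn_list,
                                              dma_addr_t iova)
{
        struct rb_node *node = pfn_list->rb_node;

        while (node) {
                struct vfio_pfn *vpfn = rb_entry(node, struct vfio_pfn, node);

                if (iova < vpfn->iova)
                        node = node->rb_left;
                else if (iova > vpfn->iova)
                        node = node->rb_right;
                else
                        return vpfn;    /* found: caller bumps ref_count */
        }
        return NULL;
}
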
> > Somehow we're going to need to fit an invalidation callback here too.
> > How would we handle a case where there are multiple mdev devices, from
> > different vendor drivers, that all have the same pfn pinned?  I'm
> > already concerned about the per pfn overhead we're introducing here so
> > clearly we cannot store an invalidation callback per pinned page, per
> > vendor driver.  Perhaps invalidations should be done using a notifier
> > chain per vfio_iommu: the vendor drivers are required to register on
> > that chain (pinning fails with an empty notifier list), user unmappings
> > are broadcast to the notifier chain, and each vendor driver is
> > responsible for deciding whether a given unmap is relevant to it
> > (potentially it's for a pinning from another driver).
> >
> > I expect we also need to enforce that vendors perform a synchronous
> > unmap such that after returning from the notifier list call, the
> > vfio_pfn should no longer exist.  If it does, we might need to BUG_ON.
> > Also pay careful attention to the locking of the notifier vs. unpin
> > callbacks to avoid deadlocks.
> >
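A rough sketch of the notifier-chain shape suggested above, under
assumed names (none of these identifiers come from the patch; a
blocking rather than atomic chain is assumed here, since unpinning may
need to sleep, which ties into the locking concern):

/*
 * Sketch only: one possible shape for a per-vfio_iommu notifier
 * chain.  Every identifier below is an illustrative assumption.
 */
#include <linux/errno.h>
#include <linux/notifier.h>
#include <linux/types.h>

#define VFIO_IOMMU_NOTIFY_DMA_UNMAP     1       /* assumed event code */

struct vfio_unmap_range {
        dma_addr_t      iova;
        size_t          size;
};

struct vfio_iommu_sketch {
        struct blocking_notifier_head   notifier; /* one chain per vfio_iommu */
        /* ... dma list, pinned-pfn rb-tree, mutex, etc. ... */
};

/* Vendor drivers must register before they may pin pages. */
static int vfio_register_notifier_sketch(struct vfio_iommu_sketch *iommu,
                                         struct notifier_block *nb)
{
        return blocking_notifier_chain_register(&iommu->notifier, nb);
}

/* Pinning fails outright while the notifier list is empty. */
static int vfio_pin_pages_sketch(struct vfio_iommu_sketch *iommu)
{
        if (!iommu->notifier.head)
                return -EINVAL;
        /* ... pin pages and insert vfio_pfn entries ... */
        return 0;
}

/*
 * On DMA_UNMAP, broadcast the range to every registered vendor
 * driver; each one decides whether the range covers its own pinnings
 * and, if so, unpins synchronously before returning.  After the chain
 * returns, no vfio_pfn may remain in the range (the BUG_ON/WARN_ON
 * material mentioned above).
 */
static void vfio_notify_unmap_sketch(struct vfio_iommu_sketch *iommu,
                                     dma_addr_t iova, size_t size)
{
        struct vfio_unmap_range range = { .iova = iova, .size = size };

        blocking_notifier_call_chain(&iommu->notifier,
                                     VFIO_IOMMU_NOTIFY_DMA_UNMAP, &range);
}
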
> 
> What about just requiring the vendor driver to provide a callback in
> the parent device ops?
> 
> I'm curious in which scenario the user application (say Qemu here) would
> unmap memory pages that are still pinned by the vendor driver... Is it
> purely a corner case that we want to handle elegantly?
> 
> If yes, possibly a simpler way is to force-destroy the mdev instead of
> asking the vendor driver to take care of each invalidation request in
> such a situation, since the mdev device won't be in a usable state
> anymore anyway... (sorry if I missed the key problem here.)
> 

Or we could call the reset callback of the parent device driver, if we
don't want to break libvirt's expectations by blindly removing the mdev
device... (a rough sketch of both callback shapes follows below)
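
A rough sketch of what such parent-ops callbacks might look like; every
name here is an assumption for illustration, not the actual mdev
parent_ops interface:

/*
 * Sketch only: the callback-in-parent-ops alternative.
 */
#include <linux/types.h>

struct mdev_device;     /* opaque here */

struct parent_ops_sketch {
        /*
         * Called when the user unmaps an iova range that may still be
         * pinned; the vendor driver must unpin synchronously.
         */
        void (*invalidate)(struct mdev_device *mdev,
                           dma_addr_t iova, size_t size);
        /*
         * Or, per the simpler alternative: reset the device rather
         * than tracking individual invalidations, leaving the mdev in
         * place so libvirt's view of the device is preserved.
         */
        void (*reset)(struct mdev_device *mdev);
};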



reply via email to

[Prev in Thread] Current Thread [Next in Thread]