From: Tian, Kevin
Subject: Re: [Qemu-devel] [PATCH 3/3] VFIO Type1 IOMMU: Add support for mediated devices
Date: Thu, 30 Jun 2016 08:28:19 +0000

> From: Alex Williamson [mailto:address@hidden]
> Sent: Wednesday, June 29, 2016 10:46 AM
> 
> On Tue, 28 Jun 2016 18:32:44 +0530
> Kirti Wankhede <address@hidden> wrote:
> 
> > On 6/22/2016 9:16 AM, Alex Williamson wrote:
> > > On Mon, 20 Jun 2016 22:01:48 +0530
> > > Kirti Wankhede <address@hidden> wrote:
> > >
> > >>
> > >>  struct vfio_iommu {
> > >>          struct list_head        domain_list;
> > >> +        struct vfio_domain      *mediated_domain;
> > >
> > > I'm not really a fan of how this is so often used to special case the
> > > code...

Better to remove this field and treat the mediated domain the same as
other domains (within vfio_domain you already have additional fields to
mark the mediated case).
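Something like below, as a rough sketch (the 'mediated' flag is only
illustrative; whatever marker vfio_domain already carries would do):

struct vfio_domain {
	struct iommu_domain	*domain;
	struct list_head	next;
	struct list_head	group_list;
	int			prot;
	bool			mediated;	/* illustrative marker, not from the patch */
};

/* every walker then checks the flag instead of a special-cased pointer */
list_for_each_entry(domain, &iommu->domain_list, next) {
	if (domain->mediated)
		continue;	/* no hardware IOMMU to program */
	/* ... existing iommu_map()/iommu_unmap() path ... */
}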

> 
> 
> > >> +        struct mdev_device      *mdev;
> > >
> > > This gets set on attach_group where we use the iommu_group to lookup
> > > the mdev, so why can't we do that on the other paths that make use of
> > > this?  I think this is just holding a reference.
> > >
> >
> > mdev is retrieved from attach_group for 2 reasons:
> > 1. to increase the ref count of mdev, via mdev_get_device_by_group(),
> > when its iommu_group is attached. That should be decremented, by
> > mdev_put_device(), while detaching its iommu_group. This makes sure
> > that mdev is not freed until its iommu_group is detached from the
> > container.
> >
> > 2. to save a reference to iommu_data that the vendor driver can use
> > to call vfio_pin_pages() and vfio_unpin_pages(). More details below.

Can't we retrieve the mdev_device from the iommu_group once the mdev is
attached to an iommu_group?
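i.e. instead of caching the pointer in vfio_iommu, take a reference only
where it's needed. A minimal sketch, assuming mdev_get_device_by_group()
and mdev_put_device() from this series (the function name below is made
up):

static int vfio_iommu_use_mdev(struct iommu_group *group)
{
	struct mdev_device *mdev;

	/* hold a reference only for the duration of the operation */
	mdev = mdev_get_device_by_group(group);
	if (!mdev)
		return -ENODEV;

	/* ... use mdev ... */

	mdev_put_device(mdev);
	return 0;
}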

> > >> @@ -569,6 +819,7 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
> > >>          uint64_t mask;
> > >>          struct vfio_dma *dma;
> > >>          unsigned long pfn;
> > >> +        struct vfio_domain *domain = NULL;
> > >>
> > >>          /* Verify that none of our __u64 fields overflow */
> > >>          if (map->size != size || map->vaddr != vaddr || map->iova != iova)
> > >> @@ -611,10 +862,21 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
> > >>          /* Insert zero-sized and grow as we map chunks of it */
> > >>          vfio_link_dma(iommu, dma);
> > >>
> > >> +        /*
> > >> +         * Skip pin and map if the domain list is empty
> > >> +         */
> > >> +        if (list_empty(&iommu->domain_list)) {
> > >> +                dma->size = size;
> > >> +                goto map_done;
> > >> +        }
> > >
> > > Again, this would be a serious consistency error for the existing use
> > > case.  Let's use indicators that are explicit.
> > >
> >
> > Why? For the existing use case (i.e. direct device assignment),
> > domain_list will not be empty; domain_list will only be empty when a
> > mediated device is assigned and no direct device is assigned.
> 
> I'm trying to be cautious about whether it actually makes sense to
> dual-purpose the existing type1 iommu backend.  What's the benefit of
> putting an exit path in the middle of a function versus splitting it
> into two separate functions with two separate callers, one of which
> only calls the first function?  What's the benefit of a struct vfio_iommu
> that hosts both mediated and directly assigned devices?  Is the
> benefit to the functionality or to the code base?  Should the fact that
> they use the same iommu API dictate that they're also managed in the
> same data structures?  When we have too many special case exits or
> branches, the code complexity increases, bugs are harder to flush out,
> and possible exploits are more likely.  Convince me that this is the
> right approach.

If we add mediated_domain to the domain_list of vfio_iommu as a normal
vfio_domain, there would be no such special case.
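Then vfio_dma_do_map() needs no early exit; the existing walk over
domain_list covers both cases. A rough sketch, reusing names from the
patch where they exist (the 'mediated' check is the same illustrative
flag as above):

list_for_each_entry(domain, &iommu->domain_list, next) {
	if (domain->mediated)
		continue;	/* pages are pinned on demand, nothing to map here */
	ret = iommu_map(domain->domain, iova,
			(phys_addr_t)pfn << PAGE_SHIFT,
			npage << PAGE_SHIFT, prot | domain->prot);
	if (ret)
		break;
}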

> 
> > > Should vfio_pin_pages instead have a struct
> > > device* parameter from which we would lookup the iommu_group and get to
> > > the vfio_domain?  That's a bit heavy weight, but we need something
> > > along those lines.
> > >
> >
> > There could be multiple mdev devices from the same mediated vendor
> > driver in one container. In that case, that vendor driver needs a
> > reference to the container or container->iommu_data to pin and unpin
> > pages. Similarly, there could be multiple mdev devices from different
> > mediated vendor drivers in one container; in that case both vendor
> > drivers need a reference to the container or container->iommu_data to
> > pin and unpin pages in their drivers.
> 
> As above, I think something like this is necessary, the proposed
> interface here is a bit of a side-channel hack.

Page-pinning should be counted per container, regardless of whether the
pinning request is for assigned devices or mediated devices, so that
double counting is avoided, as Alex pointed out.
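For example, the container could track pin counts per pfn (purely
illustrative; vfio_pfn and vfio_find_pfn() are made-up names, not from
the patch):

struct vfio_pfn {
	struct rb_node	node;	/* keyed by pfn, rooted in vfio_iommu */
	unsigned long	pfn;
	atomic_t	ref;	/* pins from all devices in this container */
};

/* assigned and mediated paths both funnel through the same counter */
static int vfio_pin_pfn(struct vfio_iommu *iommu, unsigned long pfn)
{
	struct vfio_pfn *vpfn = vfio_find_pfn(iommu, pfn);	/* made-up helper */

	if (vpfn) {
		atomic_inc(&vpfn->ref);	/* already pinned, just count it */
		return 0;
	}

	/* ... first pin: get_user_pages_fast(), insert new vfio_pfn ... */
	return 0;
}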

Thanks
Kevin


