From: Dong Jia
Subject: Re: [Qemu-devel] [RFC PATCH v4 3/3] VFIO Type1 IOMMU: Add support for mediated devices
Date: Fri, 3 Jun 2016 16:32:04 +0800

On Thu, 2 Jun 2016 00:56:47 -0700
Neo Jia <address@hidden> wrote:

> On Wed, Jun 01, 2016 at 04:40:19PM +0800, Dong Jia wrote:
> > On Wed, 25 May 2016 01:28:17 +0530
> > Kirti Wankhede <address@hidden> wrote:
> > 
> > > +
> > > +/*
> > > + * Pin a set of guest PFNs and return their associated host PFNs for API
> > > + * supported domain only.
> > > + * @vaddr [in]: array of guest PFNs
> > > + * @npage [in]: count of array elements
> > > + * @prot [in] : protection flags
> > > + * @pfn_base[out] : array of host PFNs
> > > + */
> > > +long vfio_pin_pages(void *iommu_data, dma_addr_t *vaddr, long npage,
> > > +            int prot, dma_addr_t *pfn_base)
> > > +{
> > > + struct vfio_iommu *iommu = iommu_data;
> > > + struct vfio_domain *domain = NULL;
> > > + int i = 0, ret = 0;
> > > + long retpage;
> > > + unsigned long remote_vaddr = 0;
> > > + dma_addr_t *pfn = pfn_base;
> > > + struct vfio_dma *dma;
> > > +
> > > + if (!iommu || !vaddr || !pfn_base)
> > > +         return -EINVAL;
> > > +
> > > + mutex_lock(&iommu->lock);
> > > +
> > > + if (!iommu->mediated_domain) {
> > > +         ret = -EINVAL;
> > > +         goto pin_done;
> > > + }
> > > +
> > > + domain = iommu->mediated_domain;
> > > +
> > > + for (i = 0; i < npage; i++) {
> > > +         struct vfio_pfn *p, *lpfn;
> > > +         unsigned long tpfn;
> > > +         dma_addr_t iova;
> > > +         long pg_cnt = 1;
> > > +
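> > > +         /* each vaddr[] element is a guest PFN; shift it to form the iova */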
> > > +         iova = vaddr[i] << PAGE_SHIFT;
> > Dear Kirti:
> > 
> > I have one question about the vaddr-iova conversion here.
> > Is this a common rule that can be applied to all architectures?
> > AFAIK, this is wrong for the s390 case, or I must be missing
> > something...
> 
> I need more details about the "wrong" part. 
> IIUC, you are thinking about the guest iommu case?
> 
Dear Neo:

Sorry for the mistake I made. When I saw 'vaddr', I intuitively assumed
it was a user-space virtual address. Now I see the comment saying it is
an "array of guest PFNs".

After modifying my patches to use this argument correctly, they work
fine. :>
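
For anyone else who trips over the naming, here is a minimal caller
sketch as I understand the API (the iommu_data pointer and the gfn
values below are illustrative placeholders, not from the patch):

	/*
	 * The "vaddr" argument carries guest PFNs; internally the
	 * function derives the iova from each element as
	 * iova = vaddr[i] << PAGE_SHIFT, looks up the containing
	 * vfio_dma mapping, and pins the backing host page.
	 */
	dma_addr_t gfns[2] = { 0x1000, 0x1001 }; /* guest page frame numbers */
	dma_addr_t host_pfns[2];                 /* filled with host PFNs */
	long pinned;

	pinned = vfio_pin_pages(iommu_data, gfns, 2,
				IOMMU_READ | IOMMU_WRITE, host_pfns);
	if (pinned < 0)
		return pinned; /* -EINVAL/-ENOMEM as in the error paths */

On success the return value is the number of pages pinned, and each
host_pfns[i] holds the host PFN backing the corresponding guest page.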

> Thanks,
> Neo
> 
> > 
> > If the answer to the above question is 'no', should we introduce a new
> > argument to pass in the iovas? Say 'dma_addr_t *iova'.
> > 
> > > +
> > > +         dma = vfio_find_dma(iommu, iova, 0 /*  size */);
> > > +         if (!dma) {
> > > +                 ret = -EINVAL;
> > > +                 goto pin_done;
> > > +         }
> > > +
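> > > +         /*
> > > +          * Rebase the iova's offset within this vfio_dma region
> > > +          * onto the region's user virtual address to get the host
> > > +          * VA backing the guest page.
> > > +          */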
> > > +         remote_vaddr = dma->vaddr + iova - dma->iova;
> > > +
> > > +         retpage = vfio_pin_pages_internal(domain, remote_vaddr,
> > > +                                           pg_cnt, prot, &tpfn);
> > > +         if (retpage <= 0) {
> > > +                 WARN_ON(!retpage);
> > > +                 ret = (int)retpage;
> > > +                 goto pin_done;
> > > +         }
> > > +
> > > +         pfn[i] = tpfn;
> > > +
> > > +         /* search if pfn exist */
> > > +         p = vfio_find_pfn(domain, tpfn);
> > > +         if (p) {
> > > +                 atomic_inc(&p->ref_count);
> > > +                 continue;
> > > +         }
> > > +
> > > +         /* add to pfn_list */
> > > +         lpfn = kzalloc(sizeof(*lpfn), GFP_KERNEL);
> > > +         if (!lpfn) {
> > > +                 ret = -ENOMEM;
> > > +                 goto pin_done;
> > > +         }
> > > +         lpfn->vaddr = remote_vaddr;
> > > +         lpfn->iova = iova;
> > > +         lpfn->pfn = pfn[i];
> > > +         lpfn->npage = 1;
> > > +         lpfn->prot = prot;
> > > +         atomic_inc(&lpfn->ref_count);
> > > +         vfio_link_pfn(domain, lpfn);
> > > + }
> > > +
> > > + ret = i;
> > > +
> > > +pin_done:
> > > + mutex_unlock(&iommu->lock);
> > > + return ret;
> > > +}
> > > +EXPORT_SYMBOL(vfio_pin_pages);
> > 
> > 
> > --------
> > Dong Jia
> > 
> 



--------
Dong Jia



