
Re: [Qemu-devel] Supporting emulation of IOMMUs


From: David Gibson
Subject: Re: [Qemu-devel] Supporting emulation of IOMMUs
Date: Tue, 10 May 2011 11:44:26 +1000
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Apr 21, 2011 at 09:47:31PM +0300, Eduard - Gabriel Munteanu wrote:
> On Thu, Apr 21, 2011 at 05:03:47PM +1000, David Gibson wrote:
> > A few months ago, Eduard - Gabriel Munteanu posted a series of patches
> > implementing support for emulating the AMD PCI IOMMU
> > (http://lists.nongnu.org/archive/html/qemu-devel/2011-01/msg03196.html).
> > 
> > In fact, this series implemented a general DMA/IOMMU layer which can
> > be used by any device model, and one translation backend for this
> > implementing the AMD specific PCI IOMMU.
> > 
> > These patches don't seem to have gone anywhere for the last few
> > months, however, and so far I've been unable to contact the author
> > (trying again with this mail).
> > 
> > I have an interest in this code, because the pSeries machine will also
> > need IOMMU emulation support.  At present we only support virtual
> > devices, through the PAPR interface, and we have support for the
> > hypervisor-controlled IOMMU translation in the PAPR VIO code.
> > However, we want to add PCI device support and this will also need
> > IOMMU translation.
> > 
> > The series seems to have the right basic approach, so if the author is
> > indeed MIA, I was planning to pick up the patches and resubmit them
> > (with support for the pSeries IOMMU added).
> 
> Hi,
> 
> Not really MIA, but I've been a bit busy lately, so I'm sorry I
> couldn't answer your mail in a timely fashion.
> 
> I'll try making another merge attempt tonight/tomorrow.

Ok.  Did this happen?  Sorry, I've been away for the last couple of
weeks.  I searched the qemu-devel archives but couldn't spot a new
merge attempt - did I just not look hard enough?

> > Before I do that, I was hoping to get some consensus that this is the
> > right way to go.  For reference, I have an updated version of the
> > first patch (which adds the core IOMMU layer) below.

I think the base DMA layer is the correct approach.  There are some
problems with the PCI handling, though - as someone else pointed out,
the assumption that the IOMMU is itself a PCI device is problematic
for non-x86 platforms.
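
To make that concrete, here is roughly the shape I have in mind.  All
the names below (DMAContext, dma_translate_fn and so on) are
illustrative stand-ins, not the API from the posted patches; the point
is just that the translation hook hangs off a generic, backend-agnostic
context that a bus hands to its devices:

#include <stdint.h>

typedef uint64_t dma_addr_t;

typedef struct DMAContext DMAContext;

/* Translate a bus address into a guest physical address; returns 0
 * on success, nonzero on a translation fault.  Hypothetical
 * signature, for illustration only. */
typedef int (*dma_translate_fn)(DMAContext *dma, dma_addr_t addr,
                                dma_addr_t *out_paddr,
                                dma_addr_t *out_len, int is_write);

/* The context is per-bus (or per-device) and backend-agnostic: the
 * opaque state could be AMD IOMMU page tables, a pSeries TCE table,
 * or anything else -- nothing here is itself a PCI device. */
struct DMAContext {
    dma_translate_fn translate;   /* NULL means 1:1, no IOMMU */
    void *opaque;                 /* backend state */
};

With something like this, the AMD IOMMU becomes one translate
implementation behind the callback and the pSeries TCE tables another.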

> Some developers expressed a few concerns during my last merge attempt,
> I'm going to go through them and see if they have been solved.

Ok.

[snip]
> >  * the dma_memory_map() tracking was storing the guest physical
> >    address of each mapping, but not the qemu user virtual address.
> >    However, unmap() was then attempting to look the mapping up by
> >    virtual address using a completely bogus cast.
> 
> Thanks. Map invalidation didn't get much testing; it would be nice to
> figure out a way to trigger it from a guest, say as a testcase.

Well, it wasn't a logic problem I caught - for me this bug caused a
compile failure.
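
For the record, the fix boils down to recording both addresses per
mapping, so the unmap path can search by the host virtual pointer it
is handed.  A sketch - the struct and helper names here are mine, not
the patch's:

#include <stddef.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

/* One live dma_memory_map() mapping.  Recording *both* the guest
 * physical address and the qemu user virtual address means unmap()
 * can look the entry up by the pointer the device model holds,
 * instead of casting between the two address spaces. */
typedef struct DMAMemoryMap {
    dma_addr_t paddr;             /* guest physical address */
    void *vaddr;                  /* qemu user virtual address */
    size_t len;
    struct DMAMemoryMap *next;
} DMAMemoryMap;

static DMAMemoryMap *dma_maps;

static DMAMemoryMap *dma_find_map(void *vaddr)
{
    DMAMemoryMap *m;

    for (m = dma_maps; m; m = m->next) {
        if (m->vaddr == vaddr) {
            return m;
        }
    }
    return NULL;
}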

> >  * The dma_memory_rw() function is moved from dma_rw.h to dma_rw.c;
> >    it was a bit too much code for an inline.
> > 
> >  * IOMMU support is now available on all target platforms, not just
> >    i386, but is configurable (--enable-iommu/--disable-iommu).  Stubs
> >    are used so that individual drivers can use the new dma interface,
> >    which turns into old-style cpu physical accesses at no cost on
> >    IOMMU-less builds.
> 
> My impression was that people were in favor of having the IOMMU code
> always built in (and going through direct cpu_physical_* accesses when
> no IOMMU is configured by the guest), and perhaps poisoning the old
> interfaces once everything goes through the new DMA layer.  I'm okay
> either way, though.

Oh, I had the opposite impression.  I don't care either way,
personally.
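
Either way the call sites look identical; the only question is whether
the fallback to plain physical accesses happens at configure time or
at run time.  At configure time it would be something like this sketch
(DMAContext and dma_addr_t as in the earlier sketch;
cpu_physical_memory_rw() is qemu's existing API, with its 2011-era
target_phys_addr_t signature):

/* dma_rw.h sketch: on IOMMU-less builds a DMA access degrades to a
 * plain physical-memory access, so converted device models pay no
 * cost.  CONFIG_IOMMU is the --enable-iommu/--disable-iommu switch
 * mentioned above. */
#ifdef CONFIG_IOMMU
int dma_memory_rw(DMAContext *dma, dma_addr_t addr,
                  uint8_t *buf, dma_addr_t len, int is_write);
#else
static inline int dma_memory_rw(DMAContext *dma, dma_addr_t addr,
                                uint8_t *buf, dma_addr_t len,
                                int is_write)
{
    /* No IOMMU built in: bus address == guest physical address */
    cpu_physical_memory_rw(addr, buf, len, is_write);
    return 0;
}
#endif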

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson


