qemu-stable

Re: [PATCH V2] vhost: correctly turn on VIRTIO_F_IOMMU_PLATFORM


From: Michael S. Tsirkin
Subject: Re: [PATCH V2] vhost: correctly turn on VIRTIO_F_IOMMU_PLATFORM
Date: Tue, 17 Mar 2020 02:28:42 -0400

On Mon, Mar 16, 2020 at 02:14:05PM -0400, Peter Xu wrote:
> On Mon, Mar 16, 2020 at 01:19:54PM -0400, Michael S. Tsirkin wrote:
> > On Fri, Mar 13, 2020 at 12:31:22PM -0400, Peter Xu wrote:
> > > On Fri, Mar 13, 2020 at 11:29:59AM -0400, Michael S. Tsirkin wrote:
> > > > On Fri, Mar 13, 2020 at 01:44:46PM +0100, Halil Pasic wrote:
> > > > > [..]
> > > > > > > 
> > > > > > > CCing Tom. @Tom, does vhost-vsock work for you with SEV
> > > > > > > and current qemu?
> > > > > > > 
> > > > > > > Also, one can specify iommu_platform=on on a device that
> > > > > > > ain't part of a secure-capable VM, just for the fun of it.
> > > > > > > And that breaks vhost-vsock. Or is setting iommu_platform=on
> > > > > > > only valid if qemu-system-s390x is protected virtualization
> > > > > > > capable?
> > > > > > > 
> > > > > > > BTW, I don't have a strong opinion on the Fixes tag. We
> > > > > > > currently do not recommend setting iommu_platform, and thus
> > > > > > > I don't think we care too much about past QEMUs having
> > > > > > > problems with it.
> > > > > > > 
> > > > > > > Regards,
> > > > > > > Halil
> > > > > > 
> > > > > > 
> > > > > > Let's just say that if we do have a Fixes: tag, we want it to
> > > > > > point at the commit that needs this fix.
> > > > > > 
> > > > > 
> > > > > I finally did some digging regarding the performance
> > > > > degradation. For s390x the performance degradation on vhost-net
> > > > > was introduced by commit 076a93d797 ("exec: simplify
> > > > > address_space_get_iotlb_entry"). Before that,
> > > > > IOMMUTLBEntry.addr_mask used to be based on plen, which in turn
> > > > > was calculated as the rest of the memory region's size (from the
> > > > > address), and covered most of the guest address space. That is,
> > > > > we didn't have a whole lot of IOTLB API overhead.
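
For reference, the relevant fields look roughly like this (the struct
is paraphrased from QEMU's include/exec/memory.h; the helper is a
minimal illustrative sketch, not actual QEMU code):

#include <stdbool.h>
#include <stdint.h>

/* Paraphrased from QEMU's include/exec/memory.h. */
typedef struct IOMMUTLBEntry {
    uint64_t iova;            /* input address the entry translates */
    uint64_t translated_addr; /* resulting output address */
    uint64_t addr_mask;       /* size of the covered range, minus 1 */
    /* ... target_as, perm ... */
} IOMMUTLBEntry;

/* Illustrative sketch (not QEMU code): does a cached entry already
 * cover a given iova?  With a plen-derived addr_mask a single entry
 * can span most of guest memory, so nearly every lookup hits; with
 * addr_mask == 0xfff every new 4K page is a miss. */
static bool iotlb_entry_covers(const IOMMUTLBEntry *e, uint64_t iova)
{
    return (iova & ~e->addr_mask) == (e->iova & ~e->addr_mask);
}
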
> > > > > 
> > > > > With commit 076a93d797 I see IOMMUTLBEntry.addr_mask == 0xfff,
> > > > > which comes as ~TARGET_PAGE_MASK from flatview_do_translate().
> > > > > To have things working properly I applied 75e5b70e6,
> > > > > b021d1c044, and d542800d1e on the level of 076a93d797 and
> > > > > 076a93d797~1.
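
To put a number on that overhead, a worked example (assuming 4K target
pages; illustrative arithmetic only, not code from the commit):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Worked example assuming 4K target pages; not code from the commit. */
int main(void)
{
    uint64_t addr_mask = UINT64_C(0xfff); /* ~TARGET_PAGE_MASK for 4K */
    uint64_t range = UINT64_C(1) << 30;   /* a 1G contiguous guest range */

    /* One IOTLB transaction per page: 262144 round trips for 1G. */
    printf("entries needed: %" PRIu64 "\n", range / (addr_mask + 1));
    return 0;
}
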
> > > > 
> > > > Peter, what's your take on this one?
> > > 
> > > Commit 076a93d797 was part of a patchset whose goal was to provide
> > > sensible IOTLB entries, and that should also start to work with
> > > huge pages.
> > 
> > So fundamentally the issue is that it never produces entries larger
> > than the page size.
> >
> > That is wasteful even just with huge pages, and all the more so with
> > passthrough, which could have gigabyte-sized entries.
> >
> > Want to try fixing that?
> 
> Yes, we can fix that, but I'm still not sure whether changing the
> interface of address_space_get_iotlb_entry() to cover ad hoc regions
> is a good idea, because it's still a memory core API, and IMHO it
> would still be good for the returned IOTLBs to be what the hardware
> will be using (always page-aligned IOTLBs).

E.g. with virtio-iommu, there's no hardware in sight.
And even with e.g. VT-d, page-aligned does not mean TARGET_PAGE;
entries can be much bigger.
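
For instance, a mask derived from the actual mapping size would look
like this (a sketch of the usual size-minus-one mask convention, not
QEMU code):

#include <stdint.h>

/* Sketch (not QEMU code): addr_mask for page-aligned mappings that
 * are bigger than TARGET_PAGE, e.g. VT-d 2M/1G superpages. */
static uint64_t iotlb_mask_for_shift(unsigned page_shift)
{
    return (UINT64_C(1) << page_shift) - 1;
}

/* iotlb_mask_for_shift(12) == 0xfff       4K
 * iotlb_mask_for_shift(21) == 0x1fffff    2M
 * iotlb_mask_for_shift(30) == 0x3fffffff  1G */
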

>  Also it would still not be ideal, because the vhost backend will
> still need to send the MISSING messages and block for each of the
> contiguous guest memory ranges registered, so there will still be a
> mysterious delay. Not to mention that logically all the caches can be
> invalidated too, so in that sense I think it's as hacky as the vhost
> speedup patch mentioned below..
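
(The messages in question are the vhost device-IOTLB ones; the struct
below is paraphrased from Linux's include/uapi/linux/vhost_types.h,
with the round-trip flow summarized in the comment.)

#include <linux/types.h>

/* Paraphrased from Linux include/uapi/linux/vhost_types.h. */
struct vhost_iotlb_msg {
    __u64 iova;   /* device address being translated */
    __u64 size;
    __u64 uaddr;  /* backing userspace address (for updates) */
    __u8  perm;   /* VHOST_ACCESS_RO / _WO / _RW */
    __u8  type;   /* VHOST_IOTLB_MISS, _UPDATE, _INVALIDATE, ... */
};

/* On a miss the backend sends VHOST_IOTLB_MISS for the faulting iova
 * and blocks until QEMU replies with a VHOST_IOTLB_UPDATE; with
 * page-sized entries that round trip repeats for every 4K page of an
 * otherwise contiguous range. */
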
> 
> Ideally I think vhost should be able to know when PT is enabled or
> disabled for the device, so that the vhost backend (kernel or
> userspace) can directly use GPA for DMA. That might need a new vhost
> interface.
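
Purely as a thought experiment, it might look like a negotiated
backend flag; the feature bit and helper below are made up for
illustration, no such vhost interface exists:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: this feature bit and helper are invented. */
#define VHOST_BACKEND_F_IOVA_IS_GPA  (UINT64_C(1) << 62)  /* made up */

static inline bool vhost_iova_is_gpa(uint64_t backend_features)
{
    /* If negotiated, the device address space is passthrough:
     * IOVA == GPA, so the backend could translate through its static
     * GPA->HVA memory table and skip device-IOTLB misses entirely. */
    return backend_features & VHOST_BACKEND_F_IOVA_IS_GPA;
}
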
> 
> For the s390-specific issue, I would consider Jason's patch a simple
> and ideal solution already.
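
As I read it, the gist is to tie the device IOTLB to the actual
presence of a vIOMMU. A rough paraphrase in QEMU terms (the helper
name is mine, not necessarily the patch's exact code):

#include <stdbool.h>
#include "hw/virtio/virtio.h"    /* VirtIODevice, virtio_host_has_feature() */
#include "exec/address-spaces.h" /* address_space_memory */

/* Rough paraphrase of the idea (helper name invented): only enable
 * vhost's device IOTLB when the device actually sits behind a vIOMMU,
 * not merely because the guest acked VIRTIO_F_IOMMU_PLATFORM. */
static bool vhost_dev_should_use_iotlb(VirtIODevice *vdev)
{
    return virtio_host_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM) &&
           vdev->dma_as != &address_space_memory; /* a vIOMMU remaps DMA */
}
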
> 
> Thanks,
> 
> > 
> > 
> > >  Frankly speaking, after a few years I forgot the original
> > > motivation of that whole thing, but IIRC there was a patch trying
> > > to speed things up especially for vhost, though I noticed it was
> > > never merged:
> > > 
> > > https://lists.gnu.org/archive/html/qemu-devel/2017-06/msg00574.html
> > > 
> > > Regarding the current patch, I'm not sure I understand it
> > > correctly, but does that performance issue only happen when (1)
> > > there's no intel-iommu device, and (2) iommu_platform=on is
> > > specified for the vhost backend?
> > > 
> > > If so, I confess I am not too surprised that this fails the boot
> > > with vhost-vsock, because after all we specified iommu_platform=on
> > > explicitly on the cmdline, so if we want it to work we can simply
> > > remove that iommu_platform=on while vhost-vsock doesn't support it
> > > yet...  I thought iommu_platform=on was added for that case - when
> > > we want to force the IOMMU to be enabled from the host side, and
> > > it should always be used with a vIOMMU device.
> > > 
> > > However, I also agree that from a performance POV this patch helps
> > > in this quite special case.
> > > 
> > > Thanks,
> > > 
> > > -- 
> > > Peter Xu
> > 
> 
> -- 
> Peter Xu



