
Re: [Qemu-devel] [RFC PATCH 0/3] Balloon inhibit enhancements


From: Peter Xu
Subject: Re: [Qemu-devel] [RFC PATCH 0/3] Balloon inhibit enhancements
Date: Wed, 18 Jul 2018 14:48:03 +0800
User-agent: Mutt/1.10.0 (2018-05-17)

On Tue, Jul 17, 2018 at 04:47:31PM -0600, Alex Williamson wrote:
> Directly assigned vfio devices have never been compatible with
> ballooning.  Zapping MADV_DONTNEED pages happens completely
> independent of vfio page pinning and IOMMU mapping, leaving us with
> inconsistent GPA to HPA mapping between vCPUs and assigned devices
> when the balloon deflates.  Mediated devices can theoretically do
> better, if we make the assumption that the mdev vendor driver is fully
> synchronized to the actual working set of the guest driver.  In that
> case the guest balloon driver should never be able to allocate an mdev
> pinned page for balloon inflation.  Unfortunately, QEMU can't know the
> workings of the vendor driver pinning, and doesn't actually know the
> difference between mdev devices and directly assigned devices.  Until
> we can sort out how the vfio IOMMU backend can tell us if ballooning
> is safe, the best approach is to disable ballooning any time a vfio
> device is attached.
> 
> To do that, simply make the balloon inhibitor a counter rather than a
> boolean, fixup a case where KVM can then simply use the inhibit
> interface, and inhibit ballooning any time a vfio device is attached.
> I'm expecting we'll expose some sort of flag similar to
> KVM_CAP_SYNC_MMU from the vfio IOMMU for cases where we can resolve
> this.  An addition we could consider here would be yet another device
> option for vfio, such as x-disable-balloon-inhibit, in case there are
> mdev devices that behave in a manner compatible with ballooning.
> 
> Please let me know if this looks like a good idea.  Thanks,

IMHO patches 1-2 are good cleanup as standalone patches...
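
(Side note on patch 1: a counting inhibitor could look roughly like the
sketch below.  This is only an illustration that assumes the existing
qemu_balloon_inhibit()/qemu_balloon_is_inhibited() entry points keep
their signatures; it is not the actual patch contents.

/* Sketch only: a counting balloon inhibitor.  Assumes callers invoke
 * qemu_balloon_inhibit(true/false) in pairs and run under the big QEMU
 * lock, so a plain int is enough for the illustration. */
#include <assert.h>
#include <stdbool.h>

static int balloon_inhibit_count;

void qemu_balloon_inhibit(bool state)
{
    /* Each user (KVM without a synchronous MMU, vfio, ...) holds one
     * reference; ballooning stays blocked while any reference is held. */
    balloon_inhibit_count += state ? 1 : -1;
    assert(balloon_inhibit_count >= 0);
}

bool qemu_balloon_is_inhibited(void)
{
    return balloon_inhibit_count > 0;
}

With that, vfio could take one reference when a device is attached and
drop it on detach, and it nests naturally with the KVM user mentioned
in the cover letter.)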

I have no idea whether people would want to use vfio-pci and the
balloon device at the same time.  After all, vfio-pci is mostly used
by performance-focused users, so I would vaguely guess that they don't
really care about thin provisioning of memory, and hence the usage
scenario might barely exist.  Is that the major reason why we'd just
like to disable it (which makes sense to me)?

I'm wondering what we would do if we wanted to support that some
day...  Would it work if we just let vfio-pci devices register some
guest memory invalidation hook (just like the IOMMU notifiers, but for
the guest memory address space instead), and then map/unmap the IOMMU
pages there for the vfio-pci device, to make sure the inflated balloon
pages are not mapped and that new pages are remapped with the correct
HPA after deflation?  This is a pure question out of curiosity, and of
course it makes little sense if the answer to the first question above
is positive.
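
To make that concrete, something along the lines of the sketch below is
what I'm imagining; all of the names here are made up purely for
illustration (the real DMA map/unmap helpers live in hw/vfio/common.c),
so please read it as a rough idea rather than a proposal:

/* Purely hypothetical; none of these names exist in QEMU today. */
#include <stdbool.h>
#include <stdint.h>

typedef struct GuestMemNotifier GuestMemNotifier;
struct GuestMemNotifier {
    /* discarded == true : range inflated (MADV_DONTNEED'ed on the host)
     * discarded == false: range deflated, backed by a (possibly new) HPA */
    void (*notify)(GuestMemNotifier *n, uint64_t gpa, uint64_t size,
                   bool discarded);
    GuestMemNotifier *next;
};

static GuestMemNotifier *guest_mem_notifiers;

/* A device backend (e.g. vfio) registers its hook at realize time. */
static void guest_mem_notifier_register(GuestMemNotifier *n)
{
    n->next = guest_mem_notifiers;
    guest_mem_notifiers = n;
}

/* The balloon device calls this for every inflated/deflated range. */
static void guest_mem_notify(uint64_t gpa, uint64_t size, bool discarded)
{
    GuestMemNotifier *n;

    for (n = guest_mem_notifiers; n; n = n->next) {
        n->notify(n, gpa, size, discarded);
    }
}

/* vfio hook: drop the stale IOMMU mapping on inflate, re-pin and re-map
 * the new HPA on deflate; the actual DMA map/unmap calls are elided. */
static void vfio_guest_mem_update(GuestMemNotifier *n, uint64_t gpa,
                                  uint64_t size, bool discarded)
{
    if (discarded) {
        /* vfio_dma_unmap(container, gpa, size); */
    } else {
        /* vfio_dma_map(container, gpa, size, host_vaddr, readonly); */
    }
}

The balloon device would call guest_mem_notify() for each range it
inflates or deflates, so the vfio side never keeps a stale GPA->HPA
translation pinned in the IOMMU.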

Thanks,

-- 
Peter Xu


