qemu-devel
From: David Hildenbrand
Subject: Re: [PATCH PROTOTYPE 3/6] vfio: Implement support for sparse RAM memory regions
Date: Wed, 18 Nov 2020 17:14:22 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.6.0

On 18.11.20 16:23, Peter Xu wrote:
David,

On Wed, Nov 18, 2020 at 02:04:00PM +0100, David Hildenbrand wrote:
On 20.10.20 22:44, Peter Xu wrote:
On Tue, Oct 20, 2020 at 10:01:12PM +0200, David Hildenbrand wrote:
Thanks ... but I have an AMD system. Will try to find out how to get
that running with AMD :)

You may still want to start by trying intel-iommu first. :) I think it should
work on AMD hosts too.

Just another FYI - Wei is working on amd-iommu for vfio [1], but it's still
under review.

[1] 
https://lore.kernel.org/qemu-devel/20201002145907.1294353-1-wei.huang2@amd.com/


I'm trying to get an iommu setup running (without virtio-mem!),
but it's a big mess.

Essential parts of my QEMU cmdline are:

sudo build/qemu-system-x86_64 \
     -accel kvm,kernel-irqchip=split \
     ...
     -device pcie-pci-bridge,addr=1e.0,id=pci.1 \
     -device vfio-pci,host=0c:00.0,x-vga=on,bus=pci.1,addr=1.0,multifunction=on \
     -device vfio-pci,host=0c:00.1,bus=pci.1,addr=1.1 \
     -device intel-iommu,caching-mode=on,intremap=on \

The intel-iommu device needs to be created before the rest of the devices.  I
forget the exact reason; it should be related to how the device address spaces
are created.  This rule should apply to all the other vIOMMUs too, AFAIU.

Libvirt guarantees that ordering when VT-d is enabled, though when using the
qemu cmdline directly that's indeed hard to spot at first glance... IIRC we
tried to fix this, but I forget the details; it's just not trivial.

I noticed that this ordering constraint was also missing from the qemu wiki
page on VT-d, so I updated it there too, hopefully:

https://wiki.qemu.org/Features/VT-d#Command_Line_Example


That did the trick! Thanks!!!

virtio-mem + vfio + iommu seems to work. More testing to be done.

However, malicious guests can play nasty tricks like

a) Unplugging plugged virtio-mem blocks while they are mapped via an
   IOMMU

1. Guest: map memory location X located on a virtio-mem device inside a
   plugged block into the IOMMU
   -> QEMU IOMMU notifier: create vfio DMA mapping
   -> VFIO pins the memory of the plugged block (populating memory)
2. Guest: Request to unplug memory location X via virtio-mem device
   -> QEMU virtio-mem: discards the memory.
   -> VFIO still has the memory pinned

We consume more memory than intended. If that virtio-mem memory were to get
replugged and used, we would have an inconsistency. An IOMMU device reset
fixes it (whereby all VFIO mappings are removed via the IOMMU notifier).


b) Mapping unplugged virtio-mem blocks via an IOMMU

1. Guest: map memory location X located on a virtio-mem device inside an
   unplugged block
   -> QEMU IOMMU notifier: create vfio DMA mapping
   -> VFIO pins memory of unplugged blocks (populating memory)

Memory that's supposed to be discarded now consumes memory. This is similar to a malicious guest simply writing to unplugged memory blocks (to be tackled with "protection of unplugged memory" in the future); however, here the memory will also get pinned.


To prohibit b) from happening, we would have to disallow creating the VFIO mapping (fairly easy).
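A minimal sketch of that check, assuming a per-block plugged bitmap on the
virtio-mem device (all names and the block size here are made up for
illustration; the real QEMU code paths differ):

```c
/* Sketch: reject vfio DMA mappings targeting unplugged virtio-mem
 * blocks (scenario b).  Hypothetical names; not the actual QEMU API. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE   (2u * 1024 * 1024)   /* assumed virtio-mem block size */
#define NR_BLOCKS    16

/* one bit per block: 1 = plugged, 0 = unplugged */
static uint8_t plugged_bitmap[(NR_BLOCKS + 7) / 8];

static bool block_plugged(uint64_t idx)
{
    return plugged_bitmap[idx / 8] & (1u << (idx % 8));
}

void virtio_mem_set_plugged(uint64_t idx, bool plugged)
{
    if (plugged) {
        plugged_bitmap[idx / 8] |= 1u << (idx % 8);
    } else {
        plugged_bitmap[idx / 8] &= ~(1u << (idx % 8));
    }
}

/* Called from the IOMMU notifier before mapping: fail unless every
 * block covered by [gpa, gpa + size) is currently plugged. */
int try_create_vfio_mapping(uint64_t gpa, uint64_t size)
{
    uint64_t first = gpa / BLOCK_SIZE;
    uint64_t last = (gpa + size - 1) / BLOCK_SIZE;

    for (uint64_t i = first; i <= last; i++) {
        if (!block_plugged(i)) {
            return -1;   /* would pin memory the guest unplugged */
        }
    }
    /* ...real code would now pin the pages and program the IOMMU... */
    return 0;
}
```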

To prohibit a), there would have to be some notification to IOMMU implementations to unmap/refresh whenever an IOMMU entry still points at memory that is getting discarded (and the VM is doing something it's not supposed to do).
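Such a notification mechanism could look roughly like the following sketch:
virtio-mem walks a list of registered listeners before discarding, so the
vfio side can drop overlapping mappings. Again, every name here is
hypothetical; this is not an existing QEMU interface.

```c
/* Sketch: discard notification so vfio can unmap/unpin IOMMU mappings
 * that still cover memory being discarded (scenario a). */
#include <assert.h>
#include <stdint.h>

#define MAX_LISTENERS 4

typedef void (*discard_fn)(uint64_t gpa, uint64_t size, void *opaque);

typedef struct DiscardListener {
    discard_fn notify;
    void *opaque;
} DiscardListener;

static DiscardListener listeners[MAX_LISTENERS];
static int nr_listeners;

void discard_listener_register(discard_fn fn, void *opaque)
{
    listeners[nr_listeners].notify = fn;
    listeners[nr_listeners].opaque = opaque;
    nr_listeners++;
}

/* virtio-mem would call this right before discarding guest memory */
void virtio_mem_notify_discard(uint64_t gpa, uint64_t size)
{
    for (int i = 0; i < nr_listeners; i++) {
        listeners[i].notify(gpa, size, listeners[i].opaque);
    }
}

/* Example vfio-side listener: just accounts the unmapped bytes here;
 * a real one would issue VFIO_IOMMU_UNMAP_DMA for overlapping
 * mappings so the pages get unpinned. */
void vfio_on_discard(uint64_t gpa, uint64_t size, void *opaque)
{
    uint64_t *unmapped = opaque;
    (void)gpa;
    *unmapped += size;
}
```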


As soon as I enable "intel_iommu=on" in my guest kernel, graphics
stop working (random mess on graphics output) and I get
   vfio-pci 0000:0c:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0023 address=0xff924000 flags=0x0000]
in the hypervisor, along with other nice messages.

I can spot no vfio DMA mappings coming from an IOMMU, just as if the
guest weren't even trying to set up the IOMMU.

I tried with
1. AMD Radeon RX Vega 56
2. Nvidia GT220
resulting in similar issues.

I also tried with "-device amd-iommu", with other issues
(the guest won't even boot up). Are my graphics cards missing some support,
or is there a fundamental flaw in my setup?

I guess amd-iommu won't work without Wei Huang's series applied.

Oh, okay - I spotted it in QEMU and thought this was already working :)

--
Thanks,

David / dhildenb



