From: Auger Eric
Subject: Re: [Qemu-devel] [PATCH RFC 0/4] intel_iommu: Do sanity check of vfio-pci earlier
Date: Wed, 28 Aug 2019 14:59:45 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0

Hi Peter,

On 8/13/19 10:41 AM, Jason Wang wrote:
> 
> On 2019/8/12 3:45 PM, Peter Xu wrote:
>> This is an RFC series.
>>
>> The VT-d code has some defects; one of them is that we cannot detect
>> the misuse of vIOMMU and vfio-pci early enough.
>>
>> For example, logically this is not allowed:
>>
>>    -device intel-iommu,caching-mode=off \
>>    -device vfio-pci,host=05:00.0
>>
>> Because the caching mode is required to make vfio-pci devices
>> functional.
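>>
>> The working configuration simply flips the flag:
>>
>>    -device intel-iommu,caching-mode=on \
>>    -device vfio-pci,host=05:00.0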
>>
>> Previously we did this sanity check in vtd_iommu_notify_flag_changed(),
>> i.e. when the memory regions change their attributes.  However, that's
>> too late in most cases!  The memory region layout only changes after
>> the IOMMU is enabled, which normally happens while the guest OS boots.
>> So when the configuration is wrong, we only bail out during guest boot
>> rather than simply telling the user before QEMU starts.
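>>
>> For reference, the existing check sits in the flag-change hook and
>> looks roughly like this (paraphrased from hw/i386/intel_iommu.c, not
>> verbatim):
>>
>>     static void vtd_iommu_notify_flag_changed(IOMMUMemoryRegion *iommu,
>>                                               IOMMUNotifierFlag old,
>>                                               IOMMUNotifierFlag new)
>>     {
>>         VTDAddressSpace *vtd_as = container_of(iommu, VTDAddressSpace,
>>                                                iommu);
>>         IntelIOMMUState *s = vtd_as->iommu_state;
>>
>>         /* Too late: we only get here once the guest enables DMAR */
>>         if (!s->caching_mode && new & IOMMU_NOTIFIER_MAP) {
>>             error_report("We need to set caching-mode=on for intel-iommu "
>>                          "to enable device assignment with IOMMU "
>>                          "protection.");
>>             exit(1);
>>         }
>>     }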
>>
>> The same problem happens on device hotplug, say, when we have this:
>>
>>    -device intel-iommu,caching-mode=off
>>
>> Then we do something like:
>>
>>    (HMP) device_add vfio-pci,host=05:00.0,bus=pcie.1
>>
>> If the vIOMMU is already enabled in the guest at that point, the QEMU
>> process simply quits because of this hotplug event.  This is a bit
>> insane...
>>
>> This series tries to solve the above two problems by introducing two
>> sanity checks at these places respectively:
>>
>>    - machine init done
>>    - device hotplug
>>
>> This is a bit awkward, but I hope it is better than before.  There are
>> of course other solutions, like hard-coding the check into vfio-pci,
>> but I find that even less pretty.  I couldn't think of a better way to
>> do this; if there is one, please kindly shout out.
>>
>> Please have a look to see whether this would be acceptable, thanks.
>>
>> Peter Xu (4):
>>    intel_iommu: Sanity check vfio-pci config on machine init done
>>    qdev/machine: Introduce hotplug_allowed hook
>>    pc/q35: Disallow vfio-pci hotplug without VT-d caching mode
>>    intel_iommu: Remove the caching-mode check during flag change
>>
>>   hw/core/qdev.c         | 17 +++++++++++++++++
>>   hw/i386/intel_iommu.c  | 40 ++++++++++++++++++++++++++++++++++------
>>   hw/i386/pc.c           | 21 +++++++++++++++++++++
>>   include/hw/boards.h    |  9 +++++++++
>>   include/hw/qdev-core.h |  1 +
>>   qdev-monitor.c         |  7 +++++++
>>   6 files changed, 89 insertions(+), 6 deletions(-)
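>>
>> The hotplug side (patches 2 and 3) is wired up roughly like this
>> (a sketch of the idea, not the exact patch):
>>
>>     /* qdev-monitor.c: consult the machine before a device_add */
>>     bool qdev_hotplug_allowed(DeviceState *dev, Error **errp)
>>     {
>>         MachineState *machine = MACHINE(qdev_get_machine());
>>         MachineClass *mc = MACHINE_GET_CLASS(machine);
>>
>>         if (mc->hotplug_allowed) {
>>             return mc->hotplug_allowed(machine, dev, errp);
>>         }
>>         return true;    /* no machine veto by default */
>>     }
>>
>>     /* hw/i386/pc.c: veto vfio-pci hotplug when caching-mode is off */
>>     static bool pc_machine_device_hotplug_allowed(MachineState *ms,
>>                                                   DeviceState *dev,
>>                                                   Error **errp)
>>     {
>>         X86IOMMUState *iommu = x86_iommu_get_default();
>>
>>         if (iommu &&
>>             object_dynamic_cast(OBJECT(iommu), TYPE_INTEL_IOMMU_DEVICE) &&
>>             object_dynamic_cast(OBJECT(dev), "vfio-pci") &&
>>             !INTEL_IOMMU_DEVICE(iommu)->caching_mode) {
>>             error_setg(errp, "Device assignment is not allowed without "
>>                        "enabling caching-mode=on for Intel IOMMU");
>>             return false;
>>         }
>>         return true;
>>     }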
>>
> 
> Do we need a generic solution rather than an Intel-specific one?

In

[PATCH v4 2/5] memory: Add IOMMU_ATTR_HW_NESTED_PAGING IOMMU memory
region attribute (https://patchwork.kernel.org/patch/11109701/)

and

[PATCH v4 3/5] hw/vfio/common: Fail on VFIO/HW nested paging detection
(https://patchwork.kernel.org/patch/11109697/)

I proposed introducing a new IOMMU MR attribute that reports whether the
vIOMMU uses HW nested paging to integrate with VFIO. I wonder whether
this kind of solution would fit your needs too.

Assuming we rename the attribute (whose name was challenged by Peter
anyway) to something like IOMMU_ATTR_PHYS_MAP_MODE, taking the possible
values NONE, CM, and HW_NESTED_PAGING: SMMUv3 would return
HW_NESTED_PAGING, and the Intel IOMMU would return CM if caching mode is
enabled or NONE otherwise. Then we could implement the check directly in
VFIO common.c. That way I don't think you would need the new notifiers,
and this would satisfy both requirements.
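
Roughly what I have in mind on the VFIO side (a sketch only; the
attribute name, its values, and the helper are the proposal, not
existing code):

    /* Proposed values for the renamed attribute */
    typedef enum {
        IOMMU_PHYS_MAP_MODE_NONE,
        IOMMU_PHYS_MAP_MODE_CM,
        IOMMU_PHYS_MAP_MODE_HW_NESTED_PAGING,
    } IOMMUPhysMapMode;

    /* hw/vfio/common.c: fail early when the vIOMMU cannot keep the
     * host IOMMU mappings in sync */
    static int vfio_check_phys_map_mode(IOMMUMemoryRegion *iommu_mr,
                                        Error **errp)
    {
        IOMMUPhysMapMode mode = IOMMU_PHYS_MAP_MODE_NONE;

        /* memory_region_iommu_get_attr() already exists; a vIOMMU that
         * does not implement the attribute leaves mode at NONE */
        memory_region_iommu_get_attr(iommu_mr, IOMMU_ATTR_PHYS_MAP_MODE,
                                     &mode);

        if (mode == IOMMU_PHYS_MAP_MODE_NONE) {
            error_setg(errp, "vIOMMU cannot propagate mappings to the "
                       "host IOMMU (need caching-mode or HW nested "
                       "paging)");
            return -EINVAL;
        }
        return 0;
    }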

Thoughts?

Thanks

Eric

> 
> Thanks
> 
> 


