Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges
From: Joao Martins
Subject: Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges
Date: Fri, 3 Mar 2023 23:57:03 +0000
On 03/03/2023 23:47, Alex Williamson wrote:
> On Fri, 3 Mar 2023 20:16:19 +0000
> Joao Martins <joao.m.martins@oracle.com> wrote:
>
>> On 03/03/2023 19:40, Alex Williamson wrote:
>>> On Fri, 3 Mar 2023 19:14:50 +0000
>>> Joao Martins <joao.m.martins@oracle.com> wrote:
>>>
>>>> On 03/03/2023 17:05, Alex Williamson wrote:
>>>>> On Fri, 3 Mar 2023 16:58:55 +0000
>>>>> Joao Martins <joao.m.martins@oracle.com> wrote:
>>>>>
>>>>>> On 03/03/2023 00:19, Joao Martins wrote:
>>>>>>> On 02/03/2023 18:42, Alex Williamson wrote:
>>>>>>>> On Thu, 2 Mar 2023 00:07:35 +0000
>>>>>>>> Joao Martins <joao.m.martins@oracle.com> wrote:
>>>>>>>>> @@ -426,6 +427,11 @@ void vfio_unblock_multiple_devices_migration(void)
>>>>>>>>>      multiple_devices_migration_blocker = NULL;
>>>>>>>>>  }
>>>>>>>>>
>>>>>>>>> +static bool vfio_have_giommu(VFIOContainer *container)
>>>>>>>>> +{
>>>>>>>>> +    return !QLIST_EMPTY(&container->giommu_list);
>>>>>>>>> +}
>>>>>>>>
>>>>>>>> I think it's the case, but can you confirm we build the giommu_list
>>>>>>>> regardless of whether the vIOMMU is actually enabled?
>>>>>>>>
>>>>>>> I think that is only non-empty once the first IOVA mappings are
>>>>>>> established; e.g. in IOMMU passthrough mode *I think* it's empty.
>>>>>>> Let me confirm.
>>>>>>>
>>>>>> Yeap, it's empty.
>>>>>>
>>>>>>> Otherwise I'll have to find a TYPE_IOMMU_MEMORY_REGION object to
>>>>>>> determine whether the VM was configured with a vIOMMU or not, in
>>>>>>> order to create the live migration blocker.
>>>>>>>
>>>>>> I am trying it this way, with something like the snippet below, but
>>>>>> neither x86_iommu_get_default() nor the approach below is really
>>>>>> working out yet. I'm a little afraid of having to add the live
>>>>>> migration blocker in each machine_init_done hook, unless there's a
>>>>>> more obvious way. vfio_realize should run at a much later stage, so
>>>>>> I am surprised that an IOMMU object doesn't exist by that time.
>>>>>
>>>>> Can we just test whether the container address space is system_memory?
>>>>
>>>> IIUC, it doesn't work (see below snippet).
>>>>
>>>> The problem is that the device starts out as a regular VFIO guest, and
>>>> it is only during guest boot that new mappings get
>>>> established/invalidated and propagated into the listeners
>>>> (vfio_listener_region_add), at which point they morph into having a
>>>> giommu. That's when the higher layers can figure out that 'you have a
>>>> vIOMMU', as that's when the address space gets changed? That is,
>>>> without being specific to a particular vIOMMU model. Maybe region_add
>>>> is where to add the blocker, but then it depends on the guest.
>>>
>>> This doesn't seem right to me; look for instance at
>>> pci_device_iommu_address_space(), which returns address_space_memory
>>> when there is no vIOMMU. If devices share an address space, they can
>>> share a container. When a vIOMMU is present (not even enabled), each
>>> device gets its own container due to the fact that it's in its own
>>> address space (modulo devices within the same address space due to
>>> aliasing).
>>
>> You're obviously right; I was reading this whole thing wrong. This works
>> as far as I have tested with an iommu=pt guest (and without a vIOMMU).
>>
>> I am going to shape this up and hopefully submit v3 overnight.
>>
>> @@ -416,9 +416,26 @@ void vfio_unblock_multiple_devices_migration(void)
>>      multiple_devices_migration_blocker = NULL;
>>  }
>>
>> -static bool vfio_have_giommu(VFIOContainer *container)
>> +static VFIOAddressSpace *vfio_get_address_space(AddressSpace *as);
>> +
>> +int vfio_block_giommu_migration(VFIODevice *vbasedev, Error **errp)
>>  {
>> -    return !QLIST_EMPTY(&container->giommu_list);
>> +    int ret;
>> +
>> +    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI &&
>> +        !vfio_has_iommu(vbasedev)) {
>> +        return 0;
>> +    }
>> +
>> +    error_setg(&giommu_migration_blocker,
>> +               "Migration is currently not supported with vIOMMU enabled");
>> +    ret = migrate_add_blocker(giommu_migration_blocker, errp);
>> +    if (ret < 0) {
>> +        error_free(giommu_migration_blocker);
>> +        giommu_migration_blocker = NULL;
>> +    }
>> +
>> +    return ret;
>>  }
>> diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
>> index 939dcc3d4a9e..f4cf0b41a157 100644
>> --- a/hw/vfio/pci.c
>> +++ b/hw/vfio/pci.c
>> @@ -2843,6 +2843,15 @@ static void vfio_unregister_req_notifier(VFIOPCIDevice *vdev)
>>      vdev->req_enabled = false;
>>  }
>>
>> +bool vfio_has_iommu(VFIODevice *vbasedev)
>> +{
>> +    VFIOPCIDevice *vdev = container_of(vbasedev, VFIOPCIDevice, vbasedev);
>> +    PCIDevice *pdev = &vdev->pdev;
>> +    AddressSpace *as = &address_space_memory;
>> +
>> +    return !(pci_device_iommu_address_space(pdev) == as);
>> +}
>
>
> Shouldn't this be something non-PCI specific like:
>
> return vbasedev->group->container->space != &address_space_memory;
>
Yes, much better, I've applied the following (partial diff below):
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 6cd0100bbe09..60af3c3018dc 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -421,8 +421,7 @@ int vfio_block_giommu_migration(VFIODevice *vbasedev, Error **errp)
 {
     int ret;

-    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI &&
-        !vfio_has_iommu(vbasedev)) {
+    if (vbasedev->group->container->space->as == &address_space_memory) {
         return 0;
     }