Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges


From: Joao Martins
Subject: Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges
Date: Fri, 3 Mar 2023 16:58:55 +0000

On 03/03/2023 00:19, Joao Martins wrote:
> On 02/03/2023 18:42, Alex Williamson wrote:
>> On Thu, 2 Mar 2023 00:07:35 +0000
>> Joao Martins <joao.m.martins@oracle.com> wrote:
>>> @@ -426,6 +427,11 @@ void vfio_unblock_multiple_devices_migration(void)
>>>      multiple_devices_migration_blocker = NULL;
>>>  }
>>>
>>> +static bool vfio_have_giommu(VFIOContainer *container)
>>> +{
>>> +    return !QLIST_EMPTY(&container->giommu_list);
>>> +}
>>
>> I think it's the case, but can you confirm we build the giommu_list
>> regardless of whether the vIOMMU is actually enabled?
>>
> I think that is only non-empty once we have the first IOVA mappings; e.g. in
> IOMMU passthrough mode *I think* it's empty. Let me confirm.
> 
Yeap, it's empty.
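
For context, container->giommu_list only gets entries when the VFIO memory
listener actually sees an IOMMU memory region. Roughly (simplified sketch from
memory of vfio_listener_region_add() in hw/vfio/common.c, with error handling
and the notifier setup details elided, so field names may be slightly off):

    if (memory_region_is_iommu(section->mr)) {
        IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
        VFIOGuestIOMMU *giommu;
        Error *err = NULL;

        /* one tracking entry per guest IOMMU memory region */
        giommu = g_new0(VFIOGuestIOMMU, 1);
        giommu->iommu_mr = iommu_mr;
        giommu->container = container;

        QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
        /* MAP/UNMAP notifications then drive vfio_dma_map()/unmap() */
        memory_region_register_iommu_notifier(section->mr, &giommu->n, &err);
        return;
    }

So with no vIOMMU at all, or with the vIOMMU in passthrough mode, the listener
only sees plain RAM sections and the list stays empty.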

> Otherwise I'll have to find a TYPE_IOMMU_MEMORY_REGION object to determine
> whether the VM was configured with a vIOMMU or not, in order to create the LM
> blocker.
> 
I am trying it this way, with something like the below, but neither
x86_iommu_get_default() nor the check below is really working out yet. I am a
little afraid of having to add the live migration blocker in each
machine_init_done hook, unless there's a more obvious way. vfio_realize should
run at a much later stage, so I am surprised that an IOMMU object doesn't
exist by that time.

@@ -416,9 +421,26 @@ void vfio_unblock_multiple_devices_migration(void)
     multiple_devices_migration_blocker = NULL;
 }

-static bool vfio_have_giommu(VFIOContainer *container)
+int vfio_block_giommu_migration(Error **errp)
 {
-    return !QLIST_EMPTY(&container->giommu_list);
+    int ret;
+
+    if (!object_resolve_path_type("", TYPE_INTEL_IOMMU_DEVICE, NULL) &&
+        !object_resolve_path_type("", TYPE_AMD_IOMMU_DEVICE, NULL) &&
+        !object_resolve_path_type("", TYPE_ARM_SMMU, NULL) &&
+        !object_resolve_path_type("", TYPE_VIRTIO_IOMMU, NULL)) {
+        return 0;
+    }
+
+    error_setg(&giommu_migration_blocker,
+               "Migration is currently not supported with vIOMMU enabled");
+    ret = migrate_add_blocker(giommu_migration_blocker, errp);
+    if (ret < 0) {
+        error_free(giommu_migration_blocker);
+        giommu_migration_blocker = NULL;
+    }
+
+    return ret;
 }
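
As an aside (purely hypothetical, not part of the patch): the four type
lookups could be wrapped in a small predicate, so the early return reads as
"no vIOMMU configured, nothing to block":

    static bool vfio_viommu_preset(void)
    {
        /* hypothetical helper: true if the VM was configured with any vIOMMU */
        return object_resolve_path_type("", TYPE_INTEL_IOMMU_DEVICE, NULL) ||
               object_resolve_path_type("", TYPE_AMD_IOMMU_DEVICE, NULL) ||
               object_resolve_path_type("", TYPE_ARM_SMMU, NULL) ||
               object_resolve_path_type("", TYPE_VIRTIO_IOMMU, NULL);
    }

with vfio_block_giommu_migration() then starting with:

    if (!vfio_viommu_preset()) {
        return 0;
    }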

diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 8981ae71a6f8..127a44ccaf19 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -649,6 +649,11 @@ int vfio_migration_probe(VFIODevice *vbasedev, Error **errp)
         return ret;
     }

+    ret = vfio_block_giommu_migration(errp);
+    if (ret) {
+        return ret;
+    }
+
     trace_vfio_migration_probe(vbasedev->name);
     return 0;
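
(Also thinking out loud, hypothetical and not part of this patch: a teardown
counterpart will presumably be needed so the blocker is dropped again,
mirroring what vfio_unblock_multiple_devices_migration() does for its blocker;
assuming the migrate_del_blocker(Error *) API, something like:)

    void vfio_unblock_giommu_migration(void)
    {
        if (!giommu_migration_blocker) {
            return;
        }

        migrate_del_blocker(giommu_migration_blocker);
        error_free(giommu_migration_blocker);
        giommu_migration_blocker = NULL;
    }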


