
Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges


From: Joao Martins
Subject: Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges
Date: Fri, 3 Mar 2023 19:14:50 +0000

On 03/03/2023 17:05, Alex Williamson wrote:
> On Fri, 3 Mar 2023 16:58:55 +0000
> Joao Martins <joao.m.martins@oracle.com> wrote:
> 
>> On 03/03/2023 00:19, Joao Martins wrote:
>>> On 02/03/2023 18:42, Alex Williamson wrote:  
>>>> On Thu, 2 Mar 2023 00:07:35 +0000
>>>> Joao Martins <joao.m.martins@oracle.com> wrote:  
>>>>> @@ -426,6 +427,11 @@ void vfio_unblock_multiple_devices_migration(void)
>>>>>      multiple_devices_migration_blocker = NULL;
>>>>>  }
>>>>>
>>>>> +static bool vfio_have_giommu(VFIOContainer *container)
>>>>> +{
>>>>> +    return !QLIST_EMPTY(&container->giommu_list);
>>>>> +}  
>>>>
>>>> I think it's the case, but can you confirm we build the giommu_list
>>>> regardless of whether the vIOMMU is actually enabled?
>>>>  
>>> I think that is only non-empty once we have the first IOVA mappings;
>>> e.g. in IOMMU passthrough mode *I think* it's empty. Let me confirm.
>>>   
>> Yeap, it's empty.
>>
>>> Otherwise I'll have to find a TYPE_IOMMU_MEMORY_REGION object to
>>> determine if the VM was configured with a vIOMMU or not. That is to
>>> create the LM blocker.
>>>   
>> I am trying it this way, with something like the snippet below, but
>> neither x86_iommu_get_default() nor the code below is really working
>> out yet. I am a little afraid of having to add the live migration
>> blocker in each machine_init_done hook, unless there's a more obvious
>> way. vfio_realize should run at a much later stage, so I am surprised
>> that an IOMMU object doesn't exist by that time.
> 
> Can we just test whether the container address space is system_memory?

IIUC, it doesn't work (see the snippet below).

The problem is that the guest starts as a regular VFIO guest; only during
guest boot do new mappings get established/invalidated and propagated into
the listeners (vfio_listener_region_add), and only then does the container
morph into having a giommu. That is when the higher layers can figure out
that 'you have a vIOMMU', because that is when the address space changes,
and it works without being specific to a particular IOMMU model. Maybe
region_add is where to add the blocker, but then it depends on the guest
(see the sketch right after this paragraph).
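
To make the timing issue concrete, here is a minimal sketch (not part of
the patch; the wrapper name is made up, while memory_region_is_iommu() is
an existing QEMU helper from "exec/memory.h") of the only point where a
vIOMMU becomes visible to VFIO:

static bool vfio_section_is_giommu(MemoryRegionSection *section)
{
    /*
     * A vIOMMU can only be detected once the guest has programmed it
     * and a TYPE_IOMMU_MEMORY_REGION section reaches the memory
     * listener (vfio_listener_region_add); before that, the container
     * looks like a regular system-memory VFIO setup.
     */
    return memory_region_is_iommu(section->mr);
}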

I was going to attempt this at vtd_machine_done_notify_one()?

@@ -416,9 +416,25 @@ void vfio_unblock_multiple_devices_migration(void)
     multiple_devices_migration_blocker = NULL;
 }

-static bool vfio_have_giommu(VFIOContainer *container)
+static VFIOAddressSpace *vfio_get_address_space(AddressSpace *as);
+
+int vfio_block_giommu_migration(Error **errp)
 {
-    return !QLIST_EMPTY(&container->giommu_list);
+    int ret;
+
+    if (vfio_get_address_space(&address_space_memory)) {
+        return 0;
+    }
+
+    error_setg(&giommu_migration_blocker,
+               "Migration is currently not supported with vIOMMU enabled");
+    ret = migrate_add_blocker(giommu_migration_blocker, errp);
+    if (ret < 0) {
+        error_free(giommu_migration_blocker);
+        giommu_migration_blocker = NULL;
+    }
+
+    return ret;
 }
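
For context, a hypothetical call site for the new helper might look like
the sketch below (the function name and the exact hook, e.g. somewhere in
the vfio_realize path once migration support is set up, are assumptions,
not something settled in this thread):

static int vfio_migration_setup(VFIODevice *vbasedev, Error **errp)
{
    int ret;

    /* Install the vIOMMU blocker before committing to migration. */
    ret = vfio_block_giommu_migration(errp);
    if (ret < 0) {
        return ret;
    }

    /* ... rest of the migration setup would follow here ... */
    return 0;
}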


