
Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges


From: Joao Martins
Subject: Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges
Date: Thu, 23 Feb 2023 21:19:12 +0000

On 23/02/2023 21:05, Alex Williamson wrote:
> On Thu, 23 Feb 2023 10:37:10 +0000
> Joao Martins <joao.m.martins@oracle.com> wrote:
>> On 22/02/2023 22:10, Alex Williamson wrote:
>>> On Wed, 22 Feb 2023 19:49:05 +0200
>>> Avihai Horon <avihaih@nvidia.com> wrote:  
>>>> From: Joao Martins <joao.m.martins@oracle.com>
>>>> @@ -612,6 +665,16 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
>>>>          .iova = iova,
>>>>          .size = size,
>>>>      };
>>>> +    int ret;
>>>> +
>>>> +    ret = vfio_record_mapping(container, iova, size, readonly);
>>>> +    if (ret) {
>>>> +        error_report("vfio: Failed to record mapping, iova: 0x%" HWADDR_PRIx
>>>> +                     ", size: 0x" RAM_ADDR_FMT ", ret: %d (%s)",
>>>> +                     iova, size, ret, strerror(-ret));
>>>> +
>>>> +        return ret;
>>>> +    }  
>>>
>>> Is there no way to replay the mappings when a migration is started?
>>> This seems like a horrible latency and bloat trade-off for the
>>> possibility that the VM might migrate and the device might support
>>> these features.  Our performance with vIOMMU is already terrible, I
>>> can't help but believe this makes it worse.  Thanks,
>>>   
>>
>> It is a nop if the vIOMMU is being used (entries in container->giommu_list)
>> as that uses a max-iova based IOVA range. So this is really for iommu
>> identity mapping and no-vIOMMU.
> 
> Ok, yes, there are no mappings recorded for any containers that have a
> non-empty giommu_list.
> 
>> We could replay them if they were tracked/stored anywhere.
> 
> Rather than piggybacking on vfio_memory_listener, why not simply
> register a new MemoryListener when migration is started?  That will
> replay all the existing ranges and allow tracking to happen separate
> from mapping, and only when needed.
> 

The problem with that is that *starting* dirty tracking needs the full set of
ranges up front; we aren't supposed to start tracking each range separately. So
from a memory listener callback we have no way of knowing that we are dealing
with the last range, do we?
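
To make that concrete, here is a minimal sketch of the migration-time listener
idea, purely illustrative and not taken from the patch: the 'dirty_listener'
and 'dirty_ranges' container fields, the VFIODirtyRange type and the
vfio_devices_start_dirty_tracking() helper are all hypothetical names. The
property it leans on is that memory_listener_register() replays the existing
sections synchronously via region_add before it returns, so by that point the
callback has seen every current range and tracking could be started once with
the complete set:

/* Sketch only, as if added to hw/vfio/common.c */
typedef struct VFIODirtyRange {
    hwaddr iova;
    hwaddr size;
    QLIST_ENTRY(VFIODirtyRange) next;
} VFIODirtyRange;

static void vfio_dirty_tracking_region_add(MemoryListener *listener,
                                           MemoryRegionSection *section)
{
    /* 'dirty_listener' and 'dirty_ranges' are hypothetical fields */
    VFIOContainer *container = container_of(listener, VFIOContainer,
                                            dirty_listener);
    VFIODirtyRange *range;

    if (!memory_region_is_ram(section->mr)) {
        return;
    }

    range = g_new0(VFIODirtyRange, 1);
    range->iova = section->offset_within_address_space;
    range->size = int128_get64(section->size);
    QLIST_INSERT_HEAD(&container->dirty_ranges, range, next);
}

static int vfio_migration_start_dirty_tracking(VFIOContainer *container)
{
    container->dirty_listener = (MemoryListener) {
        .region_add = vfio_dirty_tracking_region_add,
    };

    /* Registration replays every existing section before returning */
    memory_listener_register(&container->dirty_listener,
                             container->space->as);

    /* Hypothetical helper: program the device(s) with the collected ranges */
    return vfio_devices_start_dirty_tracking(container);
}

If registration did not replay everything before returning, this sketch would
have the same "which range is last" problem as above.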

>> I suppose we could move the vfio_devices_all_device_dirty_tracking() into
>> this patch and then conditionally call this vfio_{record,erase}_mapping(),
>> depending on whether we are passing through a device that doesn't have
>> live-migration support? Would that address the impact you're concerned
>> about wrt non-live-migratable devices?
>>
>> On the other hand, the PCI device hotplug hypothetical makes this a bit
>> complicated, as we can still attempt to hotplug a device before migration
>> is even attempted. Meaning that we could start with live-migratable devices
>> and add the tracking, and then hotplug a device without such support (which
>> adds a migration blocker), leaving the recorded mappings around with no
>> further use. So it felt simpler to just always track, and to avoid any
>> mapping recording if the vIOMMU is in active use?
> 
> My preference would be that there's no runtime overhead for migration
> support until a migration is initiated.  I currently don't see why we
> can't achieve that by dynamically adding a new MemoryListener around
> migration for that purpose.  Do you?  Thanks,

I definitely agree with the general sentiment of being more dynamic, but
perhaps I am just not seeing how to get there.
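
For reference, a rough stand-in (not the actual vfio_record_mapping() from
this series, which is not shown in this excerpt) for what recording at map
time implies: an allocation and a list insertion on every vfio_dma_map() for
the lifetime of the VM, whether or not a migration is ever started. The
'mappings' list on the container and the VFIOMapping type are hypothetical
names used only for illustration; the giommu_list short-circuit mirrors the
"nop when a vIOMMU is in use" behaviour described above.

typedef struct VFIOMapping {
    hwaddr iova;
    hwaddr size;
    bool readonly;
    QLIST_ENTRY(VFIOMapping) next;
} VFIOMapping;

static int vfio_record_mapping(VFIOContainer *container, hwaddr iova,
                               hwaddr size, bool readonly)
{
    VFIOMapping *mapping;

    if (!QLIST_EMPTY(&container->giommu_list)) {
        /* vIOMMU in use: dirty tracking uses the max-iova range instead */
        return 0;
    }

    /* Extra allocation + insertion on the DMA map hot path */
    mapping = g_try_new0(VFIOMapping, 1);
    if (!mapping) {
        return -ENOMEM;
    }

    mapping->iova = iova;
    mapping->size = size;
    mapping->readonly = readonly;
    QLIST_INSERT_HEAD(&container->mappings, mapping, next);

    return 0;
}

Against that, the listener-at-migration-time sketch earlier in the thread only
pays this cost while a migration is actually in flight, which is the trade-off
being weighed here.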


