Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges


From: Joao Martins
Subject: Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges
Date: Thu, 23 Feb 2023 21:54:14 +0000

On 23/02/2023 21:50, Alex Williamson wrote:
> On Thu, 23 Feb 2023 21:19:12 +0000
> Joao Martins <joao.m.martins@oracle.com> wrote:
> 
>> On 23/02/2023 21:05, Alex Williamson wrote:
>>> On Thu, 23 Feb 2023 10:37:10 +0000
>>> Joao Martins <joao.m.martins@oracle.com> wrote:  
>>>> On 22/02/2023 22:10, Alex Williamson wrote:  
>>>>> On Wed, 22 Feb 2023 19:49:05 +0200
>>>>> Avihai Horon <avihaih@nvidia.com> wrote:    
>>>>>> From: Joao Martins <joao.m.martins@oracle.com>
>>>>>> @@ -612,6 +665,16 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
>>>>>>          .iova = iova,
>>>>>>          .size = size,
>>>>>>      };
>>>>>> +    int ret;
>>>>>> +
>>>>>> +    ret = vfio_record_mapping(container, iova, size, readonly);
>>>>>> +    if (ret) {
>>>>>> +        error_report("vfio: Failed to record mapping, iova: 0x%" HWADDR_PRIx
>>>>>> +                     ", size: 0x" RAM_ADDR_FMT ", ret: %d (%s)",
>>>>>> +                     iova, size, ret, strerror(-ret));
>>>>>> +
>>>>>> +        return ret;
>>>>>> +    }    
>>>>>
>>>>> Is there no way to replay the mappings when a migration is started?
>>>>> This seems like a horrible latency and bloat trade-off for the
>>>>> possibility that the VM might migrate and the device might support
>>>>> these features.  Our performance with vIOMMU is already terrible, I
>>>>> can't help but believe this makes it worse.  Thanks,
>>>>>     
>>>>
>>>> It is a nop if the vIOMMU is being used (i.e. there are entries in
>>>> container->giommu_list), as that path uses a max-IOVA based IOVA range
>>>> instead. So this is really for IOMMU identity mapping and the no-vIOMMU
>>>> case.
>>>
>>> Ok, yes, there are no mappings recorded for any containers that have a
>>> non-empty giommu_list.
>>>   
>>>> We could replay them if they were tracked/stored anywhere.  
>>>
>>> Rather than piggybacking on vfio_memory_listener, why not simply
>>> register a new MemoryListener when migration is started?  That will
>>> replay all the existing ranges and allow tracking to happen separate
>>> from mapping, and only when needed.
>>>   
>>
>> The problem with that is that *starting* dirty tracking needs the full set of
>> ranges up front; we aren't supposed to start each range separately. And in a
>> memory listener callback we have no way of knowing when we're handling the
>> last range, do we?
> 
> As soon as memory_listener_register() returns, all your callbacks to
> build the IOVATree have been called and you can act on the result the
> same as if you were relying on the vfio mapping MemoryListener.  I'm
> not seeing the problem.  Thanks,

I was just checking memory_global_dirty_log_start() (i.e. the point where dirty
tracking gets enabled) and yes, you're definitely right.

I thought the registration was asynchronous given how many MemoryRegions there
are, but I must be confusing it with something else.
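
For the archive, here is roughly what I understood the suggestion to be: a
throwaway listener registered only when dirty tracking is about to start, used
purely to collect the IOVA bounds. The VFIODirtyRanges type and the
vfio_dirty_ranges_* names below are made up for illustration, so treat this as
a sketch of the idea rather than the eventual patch:

typedef struct VFIODirtyRanges {
    MemoryListener listener;
    hwaddr min_iova;
    hwaddr max_iova;
} VFIODirtyRanges;

static void vfio_dirty_ranges_region_add(MemoryListener *listener,
                                         MemoryRegionSection *section)
{
    VFIODirtyRanges *ranges = container_of(listener, VFIODirtyRanges,
                                           listener);
    hwaddr iova = section->offset_within_address_space;
    hwaddr end = iova + int128_get64(section->size) - 1;

    /* Only RAM sections contribute to the device dirty tracking range */
    if (!memory_region_is_ram(section->mr)) {
        return;
    }

    ranges->min_iova = MIN(ranges->min_iova, iova);
    ranges->max_iova = MAX(ranges->max_iova, end);
}

static void vfio_dirty_ranges_collect(VFIOContainer *container,
                                      VFIODirtyRanges *ranges)
{
    ranges->min_iova = HWADDR_MAX;
    ranges->max_iova = 0;
    ranges->listener = (MemoryListener) {
        .name = "vfio-dirty-ranges",
        .region_add = vfio_dirty_ranges_region_add,
    };

    /*
     * memory_listener_register() replays ->region_add synchronously for
     * every existing section, so the bounds are complete once it returns.
     */
    memory_listener_register(&ranges->listener, container->space->as);
    memory_listener_unregister(&ranges->listener);
}

The listener gets torn down right away; the collected min/max pair is what
would then be handed to the device when starting dirty tracking.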


