Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges


From: Alex Williamson
Subject: Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges
Date: Tue, 28 Feb 2023 13:36:53 -0700

On Tue, 28 Feb 2023 12:11:06 +0000
Joao Martins <joao.m.martins@oracle.com> wrote:

> On 23/02/2023 21:50, Alex Williamson wrote:
> > On Thu, 23 Feb 2023 21:19:12 +0000
> > Joao Martins <joao.m.martins@oracle.com> wrote:  
> >> On 23/02/2023 21:05, Alex Williamson wrote:  
> >>> On Thu, 23 Feb 2023 10:37:10 +0000
> >>> Joao Martins <joao.m.martins@oracle.com> wrote:    
> >>>> On 22/02/2023 22:10, Alex Williamson wrote:    
> >>>>> On Wed, 22 Feb 2023 19:49:05 +0200
> >>>>> Avihai Horon <avihaih@nvidia.com> wrote:      
> >>>>>> From: Joao Martins <joao.m.martins@oracle.com>
> >>>>>> @@ -612,6 +665,16 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
> >>>>>>          .iova = iova,
> >>>>>>          .size = size,
> >>>>>>      };
> >>>>>> +    int ret;
> >>>>>> +
> >>>>>> +    ret = vfio_record_mapping(container, iova, size, readonly);
> >>>>>> +    if (ret) {
> >>>>>> +        error_report("vfio: Failed to record mapping, iova: 0x%" HWADDR_PRIx
> >>>>>> +                     ", size: 0x" RAM_ADDR_FMT ", ret: %d (%s)",
> >>>>>> +                     iova, size, ret, strerror(-ret));
> >>>>>> +
> >>>>>> +        return ret;
> >>>>>> +    }
> >>>>>
> >>>>> Is there no way to replay the mappings when a migration is started?
> >>>>> This seems like a horrible latency and bloat trade-off for the
> >>>>> possibility that the VM might migrate and the device might support
> >>>>> these features.  Our performance with vIOMMU is already terrible, I
> >>>>> can't help but believe this makes it worse.  Thanks,
> >>>>>       
> >>>>
> >>>> It is a nop if the vIOMMU is being used (entries in container->giommu_list) as
> >>>> that uses a max-iova based IOVA range. So this is really for iommu identity
> >>>> mapping and no-VIOMMU.
> >>>
> >>> Ok, yes, there are no mappings recorded for any containers that have a
> >>> non-empty giommu_list.
> >>>     
> >>>> We could replay them if they were tracked/stored anywhere.    
> >>>
> >>> Rather than piggybacking on vfio_memory_listener, why not simply
> >>> register a new MemoryListener when migration is started?  That will
> >>> replay all the existing ranges and allow tracking to happen separate
> >>> from mapping, and only when needed.
> >>>     
> >>
> >> The problem with that is that *starting* dirty tracking needs to have all the
> >> ranges; we aren't supposed to start each range separately. So in a memory
> >> listener callback you have no way of knowing when you are dealing with the
> >> last range, do you?
> > 
> > As soon as memory_listener_register() returns, all your callbacks to
> > build the IOVATree have been called and you can act on the result the
> > same as if you were relying on the vfio mapping MemoryListener.  I'm
> > not seeing the problem.  Thanks,
> >   
> 
> While doing these changes, the nice thing about the current patch is that
> whatever changes apply to vfio_listener_region_add() will be reflected in the
> mappings tree that stores what we will dirty track. If we move the mappings
> calculation needed for dirty tracking to the point where tracking starts, we
> will have to duplicate the same checks, opening the door to bugs where we ask
> for something to be dirty tracked that hasn't been DMA mapped. The two aren't
> necessarily tied, but I felt I should raise the potential duplication of the
> checks (and the same applies to handling virtio-mem and whatnot).
> 
> I understand that if we were going to store *a lot* of mappings this would add
> up in space requirements. But for the no-vIOMMU (or iommu=pt) case this is only
> about 12 ranges or so, so it is much simpler to piggyback on the existing
> listener. Would you still want to move this to its own dedicated memory
> listener?

Code duplication and bugs are good points, but while typically we're
only seeing a few handfuls of ranges, doesn't virtio-mem in particular
mean we could be seeing quite a lot more?

We used to be limited to a fairly small number of KVM memory slots,
which effectively bounded non-vIOMMU DMA mappings, but that value is
now 2^15, so we need to anticipate that we could see many more than a
dozen mappings.

Can we make the same argument that the overhead is negligible if a VM
makes use of 10s of GB of virtio-mem with 2MB block size?

But then on a host with 4KB pages we're limited to 256 tracking
entries, so wasting all that time and space on a runtime IOVATree is
even more dubious.
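
As a rough illustration of the scale mismatch (the 64 GiB size below is
only an assumed example, and 16-byte {iova, length} entries are assumed
for the per-page limit):

    64 GiB of virtio-mem at a 2 MiB block size:
        64 GiB / 2 MiB = 32768 potential ranges when fully fragmented

    Ranges describable in one 4 KiB page of 16-byte {iova, length} entries:
        4096 / 16 = 256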

In fact, it doesn't really matter whether vfio_listener_region_add()
and this potential new listener come to the same result, as long as the
new listener is a superset of the existing listener.  So I think we can
simplify out a lot of the places we'd see duplication and bugs.  I'm
not even really sure why we wouldn't simplify things further and only
record a single range covering the low and high memory marks for
non-vIOMMU VMs, or potentially an approximation removing gaps of 1GB or
more, for example.  Thanks,

Alex
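
Purely as an illustration of that last idea (not from the thread and not
QEMU code: the helper names and the sample memory layout are assumptions),
here is a small standalone C sketch that coalesces recorded IOVA ranges by
absorbing gaps smaller than 1 GiB, so a non-vIOMMU VM ends up described by
a handful of ranges rather than one entry per mapping:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint64_t iova;
    uint64_t size;
} IOVARange;

static int range_cmp(const void *a, const void *b)
{
    const IOVARange *ra = a, *rb = b;

    return (ra->iova > rb->iova) - (ra->iova < rb->iova);
}

/*
 * Sort the recorded ranges and merge neighbours separated by a gap
 * smaller than max_gap (e.g. 1 GiB); returns how many coalesced
 * ranges remain at the front of the array.
 */
static size_t coalesce_ranges(IOVARange *ranges, size_t n, uint64_t max_gap)
{
    size_t out = 0;

    if (n == 0) {
        return 0;
    }

    qsort(ranges, n, sizeof(*ranges), range_cmp);

    for (size_t i = 1; i < n; i++) {
        uint64_t prev_end = ranges[out].iova + ranges[out].size;

        if (ranges[i].iova < prev_end + max_gap) {
            /* Small gap (or overlap): extend the previous range over it. */
            uint64_t new_end = ranges[i].iova + ranges[i].size;

            if (new_end > prev_end) {
                ranges[out].size = new_end - ranges[out].iova;
            }
        } else {
            /* Gap of max_gap or more: keep this as a separate range. */
            ranges[++out] = ranges[i];
        }
    }

    return out + 1;
}

int main(void)
{
    /* Hypothetical non-vIOMMU layout: low RAM, a hole below 4G, high RAM. */
    IOVARange ranges[] = {
        { 0x000000000ULL, 0x80000000ULL },  /* 0 - 2 GiB   */
        { 0x0c0000000ULL, 0x20000000ULL },  /* 3 - 3.5 GiB */
        { 0x100000000ULL, 0x100000000ULL }, /* 4 - 8 GiB   */
    };
    size_t n = coalesce_ranges(ranges, 3, 1ULL << 30);

    for (size_t i = 0; i < n; i++) {
        printf("range %zu: iova 0x%" PRIx64 " size 0x%" PRIx64 "\n",
               i, ranges[i].iova, ranges[i].size);
    }
    return 0;
}

With a 1 GiB threshold, the 1 GiB hole below 4G is preserved while the
smaller gap between the second and third ranges is absorbed, leaving two
tracking ranges.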



