From: David Hildenbrand
Subject: Re: [PATCH v3 04/10] vfio: Query and store the maximum number of DMA mappings
Date: Thu, 17 Dec 2020 20:04:28 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.5.0
On 17.12.20 18:55, Alex Williamson wrote:
> On Wed, 16 Dec 2020 15:11:54 +0100
> David Hildenbrand <david@redhat.com> wrote:
>
>> Let's query the maximum number of DMA mappings by querying the available
>> mappings when creating the container.
>>
>> In addition, count the number of DMA mappings and warn when we would
>> exceed it. This is a preparation for RamDiscardMgr which might
>> create quite some DMA mappings over time, and we at least want to warn
>> early that the QEMU setup might be problematic. Use "reserved"
>> terminology, so we can use this to reserve mappings before they are
>> actually created.
>
> This terminology doesn't make much sense to me, we're not actually
> performing any kind of reservation.
I see you spotted the second user which actually performs reservations.
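To illustrate the intended lifecycle (a hypothetical sketch only - the
real unreserve side lives in that later patch and isn't quoted here),
the counterpart to the reserve helper quoted below would simply give
reservations back when the consumer goes away:

/* Hypothetical counterpart to vfio_container_dma_reserve() quoted
 * further down; relies on the VFIOContainer fields this patch adds. */
static void vfio_container_dma_unreserve(VFIOContainer *container,
                                         unsigned long dma_mappings)
{
    g_assert(container->dma_reserved >= dma_mappings);
    container->dma_reserved -= dma_mappings;
}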
>
>> Note: don't reserve vIOMMU DMA mappings - using the vIOMMU region size
>> divided by the mapping page size might be a bad indication of what will
>> happen in practice - we might end up warning all the time.
>
> This suggests we're not really tracking DMA "reservations" at all.
> Would something like dma_regions_mappings be a more appropriate
> identifier for the thing you're trying to count? We might as well also
Right now I want to count:
- Mappings we know we will definitely have (counted in this patch)
- Mappings we know we could eventually have later (RamDiscardMgr)
Both get checked against the limit we query from the kernel when
creating the container - see the sketch below.
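A minimal userspace sketch of that query - assuming a type1 IOMMU on a
5.10+ kernel that exposes VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL; the function
name is made up and error handling is trimmed for illustration:

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Query how many DMA mappings the type1 IOMMU still accepts.
 * Returns 0 and fills *avail on success, -1 if unsupported. */
static int vfio_query_dma_avail(int container_fd, uint32_t *avail)
{
    struct vfio_iommu_type1_info *info;
    struct vfio_info_cap_header *hdr;
    uint32_t argsz = sizeof(*info);
    uint32_t offset;

    /* First call only tells us how large the capability chain is. */
    info = calloc(1, argsz);
    info->argsz = argsz;
    if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info) ||
        info->argsz <= argsz) {
        goto out_unsupported;
    }

    /* Second call retrieves the chain itself. */
    argsz = info->argsz;
    info = realloc(info, argsz);
    info->argsz = argsz;
    if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info) ||
        !(info->flags & VFIO_IOMMU_INFO_CAPS)) {
        goto out_unsupported;
    }

    /* Walk the chain; "next" is an offset from the buffer start. */
    for (offset = info->cap_offset; offset; offset = hdr->next) {
        hdr = (struct vfio_info_cap_header *)((char *)info + offset);
        if (hdr->id == VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL) {
            *avail = ((struct vfio_iommu_type1_info_dma_avail *)hdr)->avail;
            free(info);
            return 0;
        }
    }

out_unsupported:
    free(info);
    return -1;
}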
> keep a counter for dma_iommu_mappings where the sum of those two should
> stay below dma_max_mappings.
We could. However, tracking active IOMMU mappings when removing a memory
region from the address space isn't easily possible - we do a single
vfio_dma_unmap() which might span multiple mappings. The same applies to
RamDiscardMgr. It's hard to count how many mappings we actually
*currently* have using that approach.
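To make that concrete with the raw API (real ioctl and struct, but a
made-up function name and hypothetical values):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Unmap everything a memory region contributed. The kernel tears down
 * however many individual mappings fall into [iova, iova + size) and
 * only reports the total bytes unmapped - not how many mappings (i.e.,
 * former VFIO_IOMMU_MAP_DMA calls) were involved. */
static int vfio_unmap_region(int container_fd, uint64_t iova, uint64_t size)
{
    struct vfio_iommu_type1_dma_unmap unmap;

    memset(&unmap, 0, sizeof(unmap));
    unmap.argsz = sizeof(unmap);
    unmap.iova = iova;   /* start of the region's IOVA range */
    unmap.size = size;   /* the whole range, possibly many mappings */

    if (ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &unmap)) {
        return -1;
    }
    /* unmap.size now holds bytes unmapped; the mapping count is lost. */
    return 0;
}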
>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>> Cc: Alex Williamson <alex.williamson@redhat.com>
>> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> Cc: Igor Mammedov <imammedo@redhat.com>
>> Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
>> Cc: Peter Xu <peterx@redhat.com>
>> Cc: Auger Eric <eric.auger@redhat.com>
>> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
>> Cc: teawater <teawaterz@linux.alibaba.com>
>> Cc: Marek Kedzierski <mkedzier@redhat.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>  hw/vfio/common.c              | 34 ++++++++++++++++++++++++++++++++++
>>  include/hw/vfio/vfio-common.h |  2 ++
>>  2 files changed, 36 insertions(+)
>>
>> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
>> index 6ff1daa763..5ad88d476f 100644
>> --- a/hw/vfio/common.c
>> +++ b/hw/vfio/common.c
>> @@ -288,6 +288,26 @@ const MemoryRegionOps vfio_region_ops = {
>>      },
>>  };
>>  
>> +static void vfio_container_dma_reserve(VFIOContainer *container,
>> +                                       unsigned long dma_mappings)
>> +{
>> +    bool warned = container->dma_reserved > container->dma_max;
>> +
>> +    container->dma_reserved += dma_mappings;
>> +    if (!warned && container->dma_max &&
>> +        container->dma_reserved > container->dma_max) {
>> +        warn_report("%s: possibly running out of DMA mappings. "
>> +                    "Maximum number of DMA mappings: %d", __func__,
>> +                    container->dma_max);
>
> If we kept track of all the mappings we could predict better than
> "possibly". Tracing support to track a high water mark might be useful
> too.
It's an early warning for reservations, e.g., on fundamental setup
issues with virtio-mem.
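To give a feeling for the numbers (all values hypothetical, but the
shape of the problem is real):

/* A virtio-mem device that can grow to 1 TiB with a 2 MiB block size
 * could, fully fragmented, need one mapping per block: */
#define DEVICE_SIZE (1ULL << 40)                       /* 1 TiB  */
#define BLOCK_SIZE  (2ULL << 20)                       /* 2 MiB  */
#define WORST_CASE_MAPPINGS (DEVICE_SIZE / BLOCK_SIZE) /* 524288 */

/* vfio type1 defaults to 65535 DMA entries (dma_entry_limit module
 * parameter), so this setup could exceed the limit roughly 8x over -
 * exactly what the early warning is supposed to catch at setup time. */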
[...]
>> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
>> index 6141162d7a..fed0e85f66 100644
>> --- a/include/hw/vfio/vfio-common.h
>> +++ b/include/hw/vfio/vfio-common.h
>> @@ -88,6 +88,8 @@ typedef struct VFIOContainer {
>>      uint64_t dirty_pgsizes;
>>      uint64_t max_dirty_bitmap_size;
>>      unsigned long pgsizes;
>> +    unsigned int dma_max;
>> +    unsigned long dma_reserved;
>
> If dma_max is unsigned int, why do we need an unsigned long to track
> how many are in use? Thanks,
My thinking was that some vfio IOMMU types don't have such a limit
(dma_max == -1) and could allow for more. But thinking again, such a
huge number of mappings is highly unrealistic :)
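Back-of-the-envelope (assuming 4 KiB as the smallest mapping
granularity):

#include <stdint.h>

/* Even if every mapping were a single 4 KiB page, a 32-bit counter
 * already covers ~16 TiB of guest memory - so matching dma_max's
 * unsigned int would have been plenty. */
_Static_assert((uint64_t)UINT32_MAX * 4096 >= (16ULL << 40) - 4096,
               "32-bit mapping counter covers ~16 TiB at 4 KiB/mapping");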
--
Thanks,
David / dhildenb