Re: [Qemu-devel] [PATCH] intel-iommu: optimize nodmar memory regions


From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH] intel-iommu: optimize nodmar memory regions
Date: Fri, 15 Mar 2019 14:07:40 +0800
User-agent: Mutt/1.10.1 (2018-07-13)

On Wed, Mar 13, 2019 at 12:21:34PM +0100, Paolo Bonzini wrote:
> On 13/03/19 10:43, Peter Xu wrote:
> > Previously we had per-device system memory aliases when DMAR is
> > disabled by the system.  This slows the system down when there are
> > lots of devices, because each of the aliased system address spaces
> > contains O(N) slots, and rendering N such address spaces is O(N^2)
> > in complexity.
> > 
> > This patch introduces a shared nodmar memory region, and for each
> > device we only create an alias to that shared region.  With the
> > aliasing, the QEMU memory core API can detect that the devices share
> > the same address space (the nodmar address space) when rendering the
> > FlatViews, so the total number of FlatViews is dramatically reduced
> > when there are a lot of devices.
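
For illustration, the sharing pattern described above boils down to
roughly the following sketch against QEMU's memory API (a simplified
sketch, not the actual structures in the patch; the helper name
device_as_create() is made up for the example):

#include "qemu/osdep.h"
#include "exec/memory.h"

static MemoryRegion nodmar;     /* shared root region, created once */
static bool nodmar_created;

static void device_as_create(Object *owner, AddressSpace *as,
                             MemoryRegion *alias, const char *name)
{
    if (!nodmar_created) {
        /* Container standing in for "system memory with DMAR disabled". */
        memory_region_init(&nodmar, owner, "nodmar", UINT64_MAX);
        nodmar_created = true;
    }
    /* Per-device cost is a single alias to the shared region, instead
     * of a full per-device copy containing O(N) slots. */
    memory_region_init_alias(alias, owner, name, &nodmar, 0, UINT64_MAX);
    address_space_init(as, alias, name);
}

Because every such address space is rooted in an alias of the same
region, the memory core can render one FlatView and share it among all
of the devices.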
> > 
> > Suggested-by: Paolo Bonzini <address@hidden>
> > Signed-off-by: Peter Xu <address@hidden>
> > ---
> > 
> > Hi, Sergio,
> > 
> > This patch implements the optimization that Paolo proposed in the
> > other thread.  Could you please try it to see whether it helps in
> > your case?  Thanks,
> 
> Yes, this looks great.  Sergio, if you have time to test it, that
> would be great.  With this patch, we switch between a few big
> FlatViews at boot (before the IOMMU driver is loaded) or with
> iommu=pt, and many small FlatViews after the IOMMU driver is loaded
> and iommu!=pt.  Both should be fine for performance, and in particular
> the first case should incur no penalty at all compared to having no
> IOMMU.
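
As context for the switch described above, here is a minimal sketch of
the idea, assuming each device's root contains both the shared nodmar
alias and the per-device IOMMU region; this is a simplified
illustration, not the literal code in intel_iommu.c:

#include "exec/memory.h"

static void switch_address_space(MemoryRegion *nodmar_alias,
                                 IOMMUMemoryRegion *iommu_mr,
                                 bool dmar_enabled)
{
    /* Only one of the two subregions is enabled at any time; flipping
     * the enable bits makes the memory core re-render the FlatViews of
     * the affected address spaces. */
    memory_region_transaction_begin();
    memory_region_set_enabled(nodmar_alias, !dmar_enabled);
    memory_region_set_enabled(MEMORY_REGION(iommu_mr), dmar_enabled);
    memory_region_transaction_commit();
}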
> 
> I only made a very small change to give different names to the various
> -dmar regions, and queued the patch:
> 
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index f87b1033f6..e38c27e39c 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -2947,7 +2947,7 @@ VTDAddressSpace *vtd_find_add_as(IntelIOMMUState *s, PCIBus *bus, int devfn)
>      vtd_dev_as = vtd_bus->dev_as[devfn];
>  
>      if (!vtd_dev_as) {
> -        snprintf(name, sizeof(name), "vtd-as-%02x.%x", PCI_SLOT(devfn),
> +        snprintf(name, sizeof(name), "vtd-%02x.%x", PCI_SLOT(devfn),
>                   PCI_FUNC(devfn));
>          vtd_bus->dev_as[devfn] = vtd_dev_as = g_malloc0(sizeof(VTDAddressSpace));
>  
> @@ -2983,9 +2983,10 @@ VTDAddressSpace *vtd_find_add_as(IntelIOMMUState *s, PCIBus *bus, int devfn)
>           * region here just like what we've done above with the nodmar
>           * region.
>           */
> +        strcat(name, "-dmar");
>          memory_region_init_iommu(&vtd_dev_as->iommu, sizeof(vtd_dev_as->iommu),
>                                   TYPE_INTEL_IOMMU_MEMORY_REGION, OBJECT(s),
> -                                 "vtd-dmar", UINT64_MAX);
> +                                 name, UINT64_MAX);
>          memory_region_init_alias(&vtd_dev_as->iommu_ir, OBJECT(s), "vtd-ir",
>                                   &s->mr_ir, 0, memory_region_size(&s->mr_ir));
>          memory_region_add_subregion_overlap(MEMORY_REGION(&vtd_dev_as->iommu),

Thanks Paolo. :)

It's a pity that we can't also append the PCI bus numbers to those
names, probably because bus numbers have not yet been assigned (the
BIOS hasn't run) when vtd_find_add_as() is called for the first time.
It would be nice if we could fix that some day (though I still have no
good idea how).
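
For illustration only, the naming in question would look roughly like
the fragment below inside vtd_find_add_as() (hypothetical; it cannot
work today precisely because pci_bus_num() does not return a meaningful
bus number until the firmware has programmed the bridges):

    char name[128];

    /* Hypothetical: include the bus number in the region name.  At the
     * time vtd_find_add_as() is first called, pci_bus_num(bus) is not
     * yet meaningful, so this cannot be done today. */
    snprintf(name, sizeof(name), "vtd-%02x:%02x.%x",
             pci_bus_num(bus), PCI_SLOT(devfn), PCI_FUNC(devfn));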

Regards,

-- 
Peter Xu


