From: Laszlo Ersek
Subject: Re: [Qemu-devel] [RFC 3/3] acpi-build: allocate mcfg for multiple host bridges
Date: Tue, 22 May 2018 21:51:47 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.7.0

On 05/22/18 21:01, Marcel Apfelbaum wrote:
> Hi Laszlo,
> 
> On 05/22/2018 12:52 PM, Laszlo Ersek wrote:
>> On 05/21/18 13:53, Marcel Apfelbaum wrote:
>>>
>>> On 05/20/2018 10:28 AM, Zihan Yang wrote:
>>>> Currently only the q35 host bridge is allocated space in the MCFG
>>>> table. To put pxb hosts into separate PCI domains, each of them should
>>>> have its own configuration space in the MCFG table.
>>>>
>>>> Signed-off-by: Zihan Yang <address@hidden>
>>>> ---
>>>>    hw/i386/acpi-build.c | 83 +++++++++++++++++++++++++++++++++++++++-------------
>>>>    1 file changed, 62 insertions(+), 21 deletions(-)
>>>>
>>>> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
>>>> index 9bc6d97..808d815 100644
>>>> --- a/hw/i386/acpi-build.c
>>>> +++ b/hw/i386/acpi-build.c
>>>> @@ -89,6 +89,7 @@
>>>>    typedef struct AcpiMcfgInfo {
>>>>        uint64_t mcfg_base;
>>>>        uint32_t mcfg_size;
>>>> +    struct AcpiMcfgInfo *next;
>>>>    } AcpiMcfgInfo;
>>>>      typedef struct AcpiPmInfo {
>>>> @@ -2427,14 +2428,15 @@ build_mcfg_q35(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info)
>>>>    {
>>>>        AcpiTableMcfg *mcfg;
>>>>        const char *sig;
>>>> -    int len = sizeof(*mcfg) + 1 * sizeof(mcfg->allocation[0]);
>>>> +    int len, count = 0;
>>>> +    AcpiMcfgInfo *cfg = info;
>>>>  
>>>> +    while (cfg) {
>>>> +        ++count;
>>>> +        cfg = cfg->next;
>>>> +    }
>>>> +    len = sizeof(*mcfg) + count * sizeof(mcfg->allocation[0]);
>>>>        mcfg = acpi_data_push(table_data, len);
>>>> -    mcfg->allocation[0].address = cpu_to_le64(info->mcfg_base);
>>>> -    /* Only a single allocation so no need to play with segments */
>>>> -    mcfg->allocation[0].pci_segment = cpu_to_le16(0);
>>>> -    mcfg->allocation[0].start_bus_number = 0;
>>>> -    mcfg->allocation[0].end_bus_number = PCIE_MMCFG_BUS(info->mcfg_size - 1);
>>>>  
>>>>        /* MCFG is used for ECAM which can be enabled or disabled by guest.
>>>>         * To avoid table size changes (which create migration issues),
>>>> @@ -2448,6 +2450,17 @@ build_mcfg_q35(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info)
>>>>        } else {
>>>>            sig = "MCFG";
>>>>        }
>>>> +
>>>> +    count = 0;
>>>> +    while (info) {
>>>> +        mcfg[count].allocation[0].address = cpu_to_le64(info->mcfg_base);
>>>> +        /* Only a single allocation so no need to play with segments */
>>>> +        mcfg[count].allocation[0].pci_segment = cpu_to_le16(count);
>>>> +        mcfg[count].allocation[0].start_bus_number = 0;
>>>> +        mcfg[count++].allocation[0].end_bus_number = PCIE_MMCFG_BUS(info->mcfg_size - 1);
>>> An interesting point is whether we want to limit the MMCFG size for
>>> PXBs, as we may not be interested in using all the buses in a specific
>>> domain. For each bus we require some address space that remains unused.
>>>
>>>> +        info = info->next;
>>>> +    }
>>>> +
>>>>        build_header(linker, table_data, (void *)mcfg, sig, len, 1, NULL, NULL);
>>>>    }
>>>>    @@ -2602,26 +2615,52 @@ struct AcpiBuildState {
>>>>        MemoryRegion *linker_mr;
>>>>    } AcpiBuildState;
>>>>    -static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
>>>> +static inline void cleanup_mcfg(AcpiMcfgInfo *mcfg)
>>>> +{
>>>> +    AcpiMcfgInfo *tmp;
>>>> +    while (mcfg) {
>>>> +        tmp = mcfg->next;
>>>> +        g_free(mcfg);
>>>> +        mcfg = tmp;
>>>> +    }
>>>> +}
>>>> +
>>>> +static AcpiMcfgInfo *acpi_get_mcfg(void)
>>>>    {
>>>>        Object *pci_host;
>>>>        QObject *o;
>>>> +    AcpiMcfgInfo *head = NULL, *tail, *mcfg;
>>>>          pci_host = acpi_get_i386_pci_host();
>>>>        g_assert(pci_host);
>>>>  
>>>> -    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
>>>> -    if (!o) {
>>>> -        return false;
>>>> +    while (pci_host) {
>>>> +        mcfg = g_new0(AcpiMcfgInfo, 1);
>>>> +        mcfg->next = NULL;
>>>> +        if (!head) {
>>>> +            tail = head = mcfg;
>>>> +        } else {
>>>> +            tail->next = mcfg;
>>>> +            tail = mcfg;
>>>> +        }
>>>> +
>>>> +        o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
>>>> +        if (!o) {
>>>> +            cleanup_mcfg(head);
>>>> +            g_free(mcfg);
>>>> +            return NULL;
>>>> +        }
>>>> +        mcfg->mcfg_base = qnum_get_uint(qobject_to(QNum, o));
>>>> +        qobject_unref(o);
>>>> +
>>>> +        o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_SIZE, NULL);
>>> I'll let Igor comment on the ACPI bits, but the idea of querying each
>>> PCI host for the MMCFG details and putting it into a different table
>>> sounds good to me.
>>>
>>> [Adding Laszlo for his insights]
>> Thanks for the CC -- I don't have many (positive) insights here to
>> offer, I'm afraid. First, the patch set (including the blurb) doesn't
>> seem to explain *why* multiple host bridges / ECAM areas are a good
>> idea.
> 
> The issue we want to solve is the hard limitation of 256 PCIe devices
> that can be used in a Q35 machine.

Implying that there are use cases for which ~256 PCIe devices aren't
enough. I remain unconvinced until proved wrong :)

Anyway, a significant source of waste comes from the restriction that we
can only put 1 device (with up to 8 functions) on each non-root-complex
PCI Express bus (such as root ports and downstream ports). This forfeits
a huge portion of the ECAM area (about 31/32) that we already have.
Rather than spending more MMIO guest-phys address range on new
discontiguous ECAM ranges, I'd prefer if we could look into ARI. I seem
to recall from your earlier presentation that ARI could recover that
lost address space (meaning both ECAM ranges and PCIe B/D/F address space).
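
(To make the 31/32 figure concrete, here is a rough, illustrative sketch
-- not code from the patch -- of the ECAM arithmetic. Each bus gets a
1 MiB window (32 devices x 8 functions x 4 KiB), but a bus below a root
port or downstream port can only ever host device 0, so at most 8 of
those 256 function slots are usable. ARI, as I understand it, lets the
device and function bits together form one 8-bit function number, which
makes the whole window usable again:)

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustration only: ECAM places each function's 4 KiB config space at
     * base + (bus << 20 | dev << 15 | func << 12), i.e. 1 MiB per bus.    */
    static uint64_t ecam_offset(unsigned bus, unsigned dev, unsigned func)
    {
        return ((uint64_t)bus << 20) | (dev << 15) | (func << 12);
    }

    int main(void)
    {
        /* Bus behind a root port today: device 0 only, at most 8 functions,
         * so 8 of the 256 slots in the 1 MiB window are usable (1/32).     */
        printf("0x%" PRIx64 "\n", ecam_offset(1, 0, 7));   /* 0x107000 */

        /* With ARI the dev+func bits act as a single 8-bit function number,
         * so all 256 slots of the same window become reachable.            */
        printf("0x%" PRIx64 "\n", ecam_offset(1, 31, 7));  /* 0x1ff000 */
        return 0;
    }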

There are signs that the edk2 core supports ARI if the underlying
platform supports it. (Which is not the case with multiple PCIe domains
/ multiple ECAM ranges.)

ARI support could also help aarch64/virt. Eric (CC'd) has been working
on raising the max PCIe bus count from *16* to 256 for aarch64/virt, and
AFAIR one of the challenges there has been finding a good spot for the
larger ECAM in guest-phys address space. Fiddling with such address maps
is always a pain.

Back to x86, the guest-phys address space is quite crowded too. The
32-bit PCI MMIO aperture (with the neighboring ECAM and platform MMIO
areas such as LAPIC, IO-APIC, HPET, TPM, pflash, ...) is always a scarce
resource. Plus, reaching agreement between OVMF and QEMU on the exact
location of the 32-bit PCI MMIO aperture has always been a huge pain; so
you'd likely shoot for 64-bit.

But 64-bit is ill-partitioned and/or crowded too: first you have the
cold-plugged >4GB DRAM (whose size the firmware can interrogate), then
the hotplug DIMM area up to "etc/reserved-memory-end" (ditto), then the
64-bit PCI MMIO aperture (whose size the firmware decides itself,
because QEMU cannot be asked about it presently). Placing the additional
MMCFGs somewhere would need extending the total order between these
things at the design level, more fw_cfg files, more sanity checks in
platform code in the firmware, and, quite importantly, adding support for
multiple discontiguous MMCFGs to core edk2 code.
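
(To illustrate the kind of total order I mean -- made-up example values,
not OVMF code -- the firmware derives each region's base from the end of
the previous one, so a hypothetical extra MMCFG range would need its own
well-defined slot in that chain plus a new fw_cfg file to advertise it:)

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define GiB (1024ULL * 1024 * 1024)
    #define MiB (1024ULL * 1024)

    /* Align x up to a power-of-two boundary. */
    static uint64_t align_up(uint64_t x, uint64_t a)
    {
        return (x + a - 1) & ~(a - 1);
    }

    int main(void)
    {
        /* Made-up sizes; in reality the firmware interrogates the
         * cold-plugged RAM size and reads "etc/reserved-memory-end". */
        uint64_t dram_end     = 4 * GiB + 6 * GiB;    /* >4GB cold-plugged DRAM */
        uint64_t reserved_end = dram_end + 16 * GiB;  /* DIMM hotplug area      */

        /* A hypothetical extra MMCFG would have to be slotted in here,
         * below the 64-bit PCI MMIO aperture the firmware sizes itself. */
        uint64_t extra_mmcfg  = align_up(reserved_end, 256 * MiB);
        uint64_t pci64_base   = align_up(extra_mmcfg + 256 * MiB, GiB);

        printf("extra MMCFG @ 0x%" PRIx64 ", 64-bit PCI aperture @ 0x%" PRIx64 "\n",
               extra_mmcfg, pci64_base);
        return 0;
    }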

I don't know much about ARI but it already looks a whole lot more
attractive to me.

> We can use "PCI functions" to increase
> the number, but then we lose the hot-plugging granularity.
> 
> Currently pxb-pcie host bridges share the bus numbers of PCI domain 0,
> so adding more of them will not result in more usable
> devices (each PCIe device resides on its own bus),
> but putting each of them in a different domain removes
> that limitation.
> 
> Another possible use (there is a good chance I am wrong, adding Alex to
> correct me) may be the "modeling" of a multi-function assigned device.
> Currently all the functions of an assigned device will be placed on
> different PCIe Root Ports, "losing" the connection between them.
> 
> However, if we put all the functions of the same assigned device in the
> same PCI domain, we may be able to "re-connect" them.

OK -- why is that useful? What's the current practical problem with the
splitting you describe?

>>   Second, supporting such would take very intrusive patches / large
>> feature development for OVMF (and that's not just for OvmfPkg and
>> ArmVirtPkg platform code, but also core MdeModulePkg code).
>>
>> Obviously I'm not saying this feature should not be implemented for QEMU
>> + SeaBIOS, just that don't expect edk2 support for it anytime soon.
>> Without a convincing use case, even the review impact seems hard to
>> justify.
> 
> I understand, thank you for the feedback, the point was to be sure
> we don't decide something that *can't be done* in OVMF.

Semantics :) Obviously, everything "can be done" in software; that's why
it's *soft*ware. Who is going to write it in practice? It had taken
years until the edk2 core gained a universal PciHostBridgeDxe driver
with a well-defined platform customization interface, and that interface
doesn't support multiple domains / segments. It had also taken years
until the same edk2 core driver gained support for nonzero MMIO address
translation (i.e. where the CPU view of system RAM addresses differs
from the PCI device view of the same, for DMA purposes) -- and that's
just a linear translation, not even about IOMMUs. The PCI core
maintainers in edk2 are always very busy, and upstreaming such changes
tends to take forever.

Again, I'm not opposing this feature per se. I just see any potential
support for it in edk2 very difficult. I remain unconvinced that ~256
PCIe devices aren't enough. Even assuming I'm proved plain wrong there,
I'd still much prefer ARI (although I don't know much about it at this
point), because (a) the edk2 core already shows signs that it intends to
support ARI, (b) ARI would help aarch64/virt with nearly the same
effort, (c) it seems that ARI would address the problem (namely, wasting
MMCFG) head-on, rather than throwing yet more MMCFG at it.

My apologies if I'm completely wrong about ARI (although that wouldn't
change anything about the difficulties that are foreseeable in edk2 for
supporting multiple host bridges).

Thanks,
Laszlo


