
From: Marcel Apfelbaum
Subject: Re: [Qemu-devel] [RFC 3/3] acpi-build: allocate mcfg for multiple host bridges
Date: Wed, 23 May 2018 20:23:56 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.7.0

On 05/23/2018 03:28 PM, Laszlo Ersek wrote:
On 05/23/18 13:11, Zihan Yang wrote:
Hi all,
The original purpose was just to support multiple segments in Intel
Q35 architecture for PCIe topology, which makes bus numbers a less scarce
resource. The patches are very primitive and many things are left for
firmware to finish (the initial plan was to implement it in SeaBIOS),
the AML part in QEMU is not finished either. I'm not familiar with
OVMF or edk2, so there is no plan to touch it yet, but it seems not
necessary since it already supports multi-segment in the end.
That's incorrect. EDK2 stands for "EFI Development Kit II", and it is a
collection of "universal" (= platform- and ISA-independent) modules
(drivers and libraries), and platform- and/or ISA-dependent modules
(drivers and libraries). The OVMF firmware is built from a subset of
these modules; the final firmware image includes modules from both
categories -- universal modules, and modules specific to the i440fx and
Q35 QEMU boards. The first category generally lives under MdePkg/,
MdeModulePkg/, UefiCpuPkg/, NetworkPkg/, PcAtChipsetPkg/, etc; while the
second category lives under OvmfPkg/.

(The exact same applies to the ArmVirtQemu firmware, with the second
category consisting of ArmVirtPkg/ and OvmfPkg/ modules.)

When we discuss anything PCI-related in edk2, it usually affects both

(a) the universal/core modules, such as

   - the PCI host bridge / root bridge driver at

   - the PCI bus driver at "MdeModulePkg/Bus/Pci/PciBusDxe",

(b) and the platform-specific modules, such as

   - "OvmfPkg/IncompatiblePciDeviceSupportDxe" which causes PciBusDxe to
     allocate 64-bit MMIO BARs above 4 GB, regardless of option ROM
     availability (as long as a CSM is not present), conserving 32-bit
     MMIO aperture for 32-bit BARs,

   - "OvmfPkg/PciHotPlugInitDxe", which implements support for QEMU's
     resource reservation hints, so that we can avoid IO space exhaustion
     with many PCIe root ports, and so that we can reserve MMIO aperture
     for hot-plugging devices with large MMIO BARs,

   - "OvmfPkg/Library/DxePciLibI440FxQ35", which is a low-level PCI
     config space access library, usable in the DXE and later phases,
     that plugs into several drivers, and uses 0xCF8/0xCFC on i440fx, and
     ECAM on Q35,

   - "OvmfPkg/Library/PciHostBridgeLib", which plugs into
     "PciHostBridgeDxe" above, exposing the various resource apertures to
     said host bridge / root bridge driver, and implementing support for
     the PXB / PXBe devices,

   - "OvmfPkg/PlatformPei", which is an early (PEI phase) module with a
     grab-bag of platform support code; e.g. it informs
     "DxePciLibI440FxQ35" above about the QEMU board being Q35 vs.
     i440fx, it configures the ECAM (exbar) registers on Q35, it
     determines where the 32-bit and 64-bit PCI MMIO apertures should be;

   - "ArmVirtPkg/Library/BaseCachingPciExpressLib", which is the
     aarch64/virt counterpart of "DxePciLibI440FxQ35" above,

   - "ArmVirtPkg/Library/FdtPciHostBridgeLib", which is the aarch64/virt
     counterpart of "PciHostBridgeLib", consuming the DTB exposed by
     qemu-system-aarch64,

   - "ArmVirtPkg/Library/FdtPciPcdProducerLib", which is an internal
     library that turns parts of the DTB that is exposed by
     qemu-system-aarch64 into various PCI-related, firmware-wide, scalar
     variables (called "PCDs"), upon which both
     "BaseCachingPciExpressLib" and "FdtPciHostBridgeLib" rely.

The point is that any PCI feature in any edk2 platform firmware comes
together from (a) core module support for the feature, and (b) platform
integration between the core code and the QEMU board in question.
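
For concreteness, here is a minimal sketch (illustrative, not actual edk2 code) of the ECAM translation that a config-space access library such as "DxePciLibI440FxQ35" performs on Q35. The base address 0xB0000000 used in the test is QEMU's default Q35 exbar value; per the PCIe spec, each function gets a 4 KiB configuration window:

```c
#include <stdint.h>

/* Map (bus, device, function, offset) to an MMIO address inside the
 * ECAM (MMCFG) window. Bus selects a 1 MiB slice, device a 32 KiB
 * slice, function a 4 KiB slice; the low 12 bits address the
 * function's 4 KiB config space. */
static uint64_t ecam_address(uint64_t ecam_base, uint8_t bus,
                             uint8_t dev, uint8_t fn, uint16_t offset)
{
    return ecam_base
         | ((uint64_t)bus << 20)          /* 1 MiB per bus */
         | ((uint64_t)(dev & 0x1F) << 15) /* 32 KiB per device */
         | ((uint64_t)(fn & 0x7) << 12)   /* 4 KiB per function */
         | (offset & 0xFFF);
}
```

With multiple segments, each segment simply gets its own ECAM base (one MCFG allocation entry per segment); the per-segment translation above is unchanged.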

If (a) is missing, that implies a very painful uphill battle, which is
why I'd been loudly whining, initially, in this thread, until I realized
that the core support was there in edk2, for PCIe segments.

However, (b) is required as well -- i.e., platform integration under
OvmfPkg/ and perhaps ArmVirtPkg/, between the QEMU boards and the core
edk2 code --, and that definitely doesn't exist for the PCIe segments.

If (a) exists and is flexible enough, then we at least have a chance at
writing the platform support code (b) for it. So that's why I've stopped
whining. Writing (b) is never easy -- in this case, a great many of the
platform modules that I've listed above, under OvmfPkg/ pathnames, could
be affected, or even be eligible for replacement -- but (b) is at least
imaginable in practice. Modules in category (a) are shipped *in* -- not
"on" -- every single physical UEFI platform that you can buy today,
which is one reason why it's hugely difficult to implement nontrivial
changes for them.

In brief: your statement is incorrect because category (b) is missing.
And that requires dedicated QEMU support, similarly to how
"OvmfPkg/PciHotPlugInitDxe" requires the vendor-specific resource
reservation capability, and how "OvmfPkg/Library/PciHostBridgeLib"
consumes the "etc/extra-pci-roots" fw_cfg file, and how most everything
that ArmVirtQemu does for PCI(e) originates from QEMU's DTB.

* 64-bit space is crowded and there are no standards within QEMU for
   placing per domain 64-bit MMIO and MMCFG ranges
* We cannot put ECAM arbitrarily high because guest's PA width is
   limited by host's when EPT is enabled.
That's right. One argument is that firmware can lay out these apertures
and ECAM ranges internally. But that argument breaks down when you hit
the PCPU physical address width, and would like the management stack,
such as libvirtd, to warn you in advance. For that, either libvirtd or
QEMU has to know, or direct, the layout.
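
A back-of-the-envelope sketch (illustrative, not QEMU code) of why the PCPU address width bites: each segment's ECAM window covers 256 buses x 32 devices x 8 functions x 4 KiB = 256 MiB, so stacking many segments (plus the 64-bit MMIO apertures) quickly approaches a small guest PA width, e.g. 36 bits (64 GiB) on older hosts with EPT:

```c
#include <stdint.h>

/* 256 buses x 32 devices x 8 functions x 4 KiB per function. */
#define ECAM_PER_SEGMENT (256ULL * 32 * 8 * 4096)  /* 256 MiB */

/* Would num_segments contiguous ECAM windows starting at ecam_base
 * still be addressable with pa_bits of physical address width? */
static int fits_in_pa_width(uint64_t ecam_base, unsigned num_segments,
                            unsigned pa_bits)
{
    uint64_t top = ecam_base + (uint64_t)num_segments * ECAM_PER_SEGMENT;
    return top <= (1ULL << pa_bits);
}
```

This is exactly the calculation that either QEMU or libvirtd would need to perform up front in order to warn the user, rather than leaving the layout entirely to firmware.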

* NUMA modeling seems to be a stronger motivation than the limitation
   of 256 bus numbers, that each NUMA node holds its own PCI(e)

NUMA modeling is not the motivation, the motivation is that each PCI
domain can have up to 256 buses and the PCI Express architecture
dictates one PCI device per bus.

The limitation we have with NUMA is that a PCI Host-Bridge can
belong to a single NUMA node.
I'd also like to get more information about this -- I thought pxb-pci(e)
was already motivated by supporting NUMA locality.

  And, to my knowledge,
pxb-pci(e) actually *solved* this problem. Am I wrong?
You are right.
  Let's say you
have 16 NUMA nodes (which seems pretty large to me); is it really
insufficient to assign ~16 devices to each node?
It's not about a "per node" limitation, it is about several scenarios:
 - We have Ray from Intel trying to use 1000 virtio-net devices (God knows why :) ).
 - We may have a VM managing some backups (tapes), we may have a lot of these.
 - We may want indeed to create a nested solution as Michael mentioned.
The "main/hidden" issue: At some point we will switch to Q35 as the default X86 machine (QEMU 3.0 :) ?)
and then we don't want people to be disappointed by such a "regression".
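
A rough sketch of the arithmetic behind these scenarios (illustrative only): since PCI Express places one endpoint per bus (each device sits behind a root port or downstream port that consumes a bus number), a single segment's 256 bus numbers bound the device count, and e.g. the 1000-device case above needs several segments:

```c
/* Minimum number of PCI segments needed for a given endpoint count,
 * assuming one endpoint per bus and reserving bus 0 for the host
 * bridge / root ports, i.e. at most 255 usable buses per segment.
 * (Real topologies lose further buses to switches.) */
static unsigned segments_needed(unsigned num_endpoints)
{
    const unsigned buses_per_segment = 255;
    return (num_endpoints + buses_per_segment - 1) / buses_per_segment;
}
```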

Thanks for your time Laszlo, and sorry for putting you on the spotlight.

