
Re: [Qemu-devel] [SeaBIOS] [RFC v2 0/3] Support multiple pci domains in pci_device


From: Marcel Apfelbaum
Subject: Re: [Qemu-devel] [SeaBIOS] [RFC v2 0/3] Support multiple pci domains in pci_device
Date: Tue, 28 Aug 2018 08:37:39 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1

Hi Gerd

On 08/28/2018 07:12 AM, Zihan Yang wrote:
Gerd Hoffmann <address@hidden> wrote on Mon, Aug 27, 2018 at 7:04 AM:
   Hi,

   However, QEMU only binds ports 0xcf8 and 0xcfc to
bus pcie.0. To avoid bus conflicts, we should use other port pairs for
buses under new domains.
I would skip support for IO based configuration and use only MMCONFIG
for extra root buses.
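For reference, a minimal sketch of the two config access methods, the shared
0xcf8/0xcfc pair versus a per-domain MMCONFIG window (the base address and
helper names below are illustrative, not QEMU's actual layout):

#include <stdint.h>

/* Legacy CAM access: the single 0xcf8/0xcfc pair only encodes
 * bus/device/function within ONE domain (plus the enable bit),
 * so it cannot reach buses in extra domains. */
static inline uint32_t cam_cfg_addr(uint8_t bus, uint8_t dev,
                                    uint8_t fn, uint8_t off)
{
    /* Written to port 0xcf8; the data is then read/written at 0xcfc. */
    return 0x80000000u | ((uint32_t)bus << 16) | ((uint32_t)dev << 11) |
           ((uint32_t)fn << 8) | (off & 0xfc);
}

/* ECAM/MMCONFIG access: config space is a flat MMIO window, one per
 * domain, so an extra domain only needs its own base address.  Where
 * that base lives is the open question below. */
static inline uint64_t ecam_cfg_addr(uint64_t mmconfig_base, uint8_t bus,
                                     uint8_t dev, uint8_t fn, uint16_t off)
{
    return mmconfig_base + ((uint64_t)bus << 20) +
           ((uint64_t)dev << 15) + ((uint64_t)fn << 12) + off;
}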

The question remains: how do we assign MMCONFIG space for
each PCI domain?

Thanks for your comments!

Allocation-wise it would be easiest to place them above 4G.  Right after
memory, or after etc/reserved-memory-end (if that fw_cfg file is
present), where the 64bit pci bars would have been placed.  Move the pci
bars up in address space to make room.

Only problem is that seabios wouldn't be able to access mmconfig then.

Placing them below 4G would work at least for a few pci domains.  q35
mmconfig bar is placed at 0xb0000000 -> 0xbfffffff, basically for
historical reasons.  Old qemu versions had 2.75G low memory on q35 (up
to 0xafffffff), and I think old machine types still have that for live
migration compatibility reasons.  Modern qemu uses 2G only, to make
gigabyte alignment work.

32bit pci bars are placed above 0xc0000000.  The address space from 2G
to 2.75G (0x80000000 -> 0xafffffff) is unused on new machine types.
Enough room for three additional mmconfig bars (full size), so four
pci domains total if you add the q35 one.
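Spelling out that arithmetic in a minimal sketch (the base addresses here are
only illustrative, not a committed QEMU layout):

#include <stdint.h>

/* Illustrative layout of the unused 2G..2.75G range, giving every extra
 * domain a full-size MMCONFIG bar (256 buses * 1MB = 256MB). */
#define EXTRA_MMCONFIG_START  0x80000000ULL              /* 2G    */
#define EXTRA_MMCONFIG_END    0xb0000000ULL              /* 2.75G */
#define MMCONFIG_FULL_SIZE    (256ULL << 20)             /* 256MB */

/* (0xb0000000 - 0x80000000) / 0x10000000 = 3 extra domains; adding the
 * existing q35 window at 0xb0000000 gives four domains total. */
static inline uint64_t extra_mmconfig_base(unsigned domain /* 1..3 */)
{
    return EXTRA_MMCONFIG_START + (uint64_t)(domain - 1) * MMCONFIG_FULL_SIZE;
}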
Maybe we can support 4 domains first, before we come up
with a better solution. But I'm not sure whether four domains are
enough for those who want a very large number of devices.

(Adding Michael)

Since we will not use all 256 buses of an extra PCI domain,
I think this space will allow us to support more PCI domains.
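As a rough illustration of that point (the 16-buses-per-domain figure below is
just an example, not a proposed default):

#include <stdint.h>

/* If each extra domain only gets MMCONFIG space for the buses it really
 * uses (1MB of config space per bus), the same 768MB window stretches
 * much further. */
#define FREE_MMCONFIG_WINDOW  0x30000000ULL    /* 2G..2.75G = 768MB */

static inline unsigned extra_domains_possible(unsigned buses_per_domain)
{
    return FREE_MMCONFIG_WINDOW / ((uint64_t)buses_per_domain << 20);
}

/* extra_domains_possible(256) == 3    (full-size windows, as above)
 * extra_domains_possible(16)  == 48   (16 buses per extra domain)   */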

What will the flow look like?

1. QEMU passes to SeaBIOS information on how many extra
   PCI domains are needed, and how many buses per domain.
   How will it pass this info? A vendor-specific capability,
   some PCI registers, or by modifying the extra-pci-roots fw_cfg file?
   (A rough sketch of one possible flow follows this list.)

2. SeaBIOS assigns the address for each PCI domain and
    returns the information to QEMU.
    How will it do that? Some pxb-pcie registers? Or do we model
    the MMCFG like a PCI BAR?

3. Once QEMU gets the MMCFG addresses, it can answer
    MMIO configuration cycles.

4. SeaBIOS queries the devices in all PCI domains, then computes
   and assigns IO/MEM resources (for PCI domains > 0 it will
   use MMCFG to configure the PCI devices).

5. QEMU uses the IO/MEM information to create the CRS for each
    extra PCI host bridge.

6. SeaBIOS gets the ACPI tables from QEMU and passes them to the
   guest OS.
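A minimal sketch of steps 1 and 2, assuming QEMU describes the extra domains
through something like a fw_cfg file and SeaBIOS carves MMCONFIG windows out
of the free 2G..2.75G range. The structure names and table contents are
hypothetical, and how the chosen bases get reported back to QEMU (a pxb-pcie
register, or a BAR-like object) is exactly the open question above:

#include <stdint.h>
#include <stdio.h>

#define MMCONFIG_START  0x80000000ULL   /* start of the free 2G..2.75G window */
#define MMCONFIG_END    0xb0000000ULL

struct pxb_domain {
    unsigned domain;          /* PCI domain number (> 0)            */
    unsigned num_buses;       /* buses QEMU said this domain needs  */
    uint64_t mmconfig_base;   /* assigned by firmware               */
};

/* Step 1: QEMU tells firmware how many domains / buses it wants
 * (e.g. via a fw_cfg file); modelled here as a static table. */
static struct pxb_domain domains[] = {
    { .domain = 1, .num_buses = 8  },
    { .domain = 2, .num_buses = 16 },
};

/* Step 2: firmware carves MMCONFIG windows out of the free range and
 * would report each base back to QEMU. */
static int assign_mmconfig(void)
{
    uint64_t next = MMCONFIG_START;
    for (unsigned i = 0; i < sizeof(domains) / sizeof(domains[0]); i++) {
        uint64_t size = (uint64_t)domains[i].num_buses << 20; /* 1MB/bus */
        if (next + size > MMCONFIG_END)
            return -1;                    /* out of low MMCONFIG space */
        domains[i].mmconfig_base = next;
        next += size;
    }
    return 0;
}

int main(void)
{
    if (assign_mmconfig())
        return 1;
    for (unsigned i = 0; i < sizeof(domains) / sizeof(domains[0]); i++)
        printf("domain %u: mmconfig at 0x%llx\n", domains[i].domain,
               (unsigned long long)domains[i].mmconfig_base);
    return 0;
}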

Thanks,
Marcel

cheers,
   Gerd




