From: Laszlo Ersek
Subject: Re: [Qemu-devel] [RFC 0/2] ARM virt: Support up to 256 PCIe buses
Date: Wed, 23 May 2018 19:45:45 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.7.0

Hi Eric,

On 05/23/18 18:03, Eric Auger wrote:
> The current machvirt PCI host controller's ECAM region is 16MB,
> which limits the number of PCIe buses to 16.
>
> PC/Q35 machines have a 256MB region allowing up to 256 buses.
> This series tries to bridge the gap.
>
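
(Side note for readers on the arithmetic: ECAM assigns each PCI
function a 4KB config window, and a bus can carry at most 32 devices
x 8 functions, i.e. 1MB of ECAM per bus -- hence 16 buses per 16MB
and 256 per 256MB. A trivial self-contained check in C:)

  /* ECAM sizing: 4KB config space per function, 32 devs x 8 fns per bus */
  #include <stdio.h>

  int main(void)
  {
      const unsigned long ecam_per_bus = 4096UL * 32 * 8;      /* 1MB */

      printf("16MB ECAM  -> %lu buses\n", (16UL << 20) / ecam_per_bus);
      printf("256MB ECAM -> %lu buses\n", (256UL << 20) / ecam_per_bus);
      return 0;
  }
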
> It declares a new 256MB ECAM region located beyond 256GB (just after
> the hypothetical new GICv3 RDIST region). The new ECAM region is used
> whenever the highmem option is set (the default) and is disabled for
> machine types older than 3.0.
>
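
(For orientation -- combining the sizes above with the dmesg quoted
below, the new window presumably amounts to an entry like the
following in hw/arm/virt.c's memory map; the region name here is my
invention, not necessarily what the patch uses:)

  [VIRT_HIGH_PCIE_ECAM] = { 0x4010000000ULL, 0x10000000 }, /* 256MB, just past 256GB */
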
> Best Regards
>
> Eric
>
> Git: complete series available at
> https://github.com/eauger/qemu/tree/v2.12.0-256MB-ECAM-RFCv1
>
> - Tested with guest running in aarch64 and aarch32 modes (aarch64=off)
> - In aarch32 mode I hit an issue where the vmalloc region was
>   reported as too small for the ECAM mapping (dmesg excerpt below),
>   so I had to enlarge it by passing "vmalloc=512M" in the bootargs,
>   after which the guest booted fine.
>
> [    1.399581] pl061_gpio 9030000.pl061: PL061 GPIO chip @0x0000000009030000 registered
> [    1.402636] OF: PCI: host bridge /address@hidden ranges:
> [    1.404506] OF: PCI:    IO 0x3eff0000..0x3effffff -> 0x00000000
> [    1.406606] OF: PCI:   MEM 0x10000000..0x3efeffff -> 0x10000000
> [    1.408690] OF: PCI:   MEM 0x8000000000..0xffffffffff -> 0x8000000000
> [    1.411992] vmap allocation for size 1052672 failed: use vmalloc=<size> to increase size
> [    1.414895] pci-host-generic 4010000000.pcie: ECAM ioremap failed
> [    1.427472] pci-host-generic: probe of 4010000000.pcie failed with error -12
>
> - Maybe this issue deserves introducing a new highmem_ecam option?
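
(If such an option gets introduced, I imagine it could be wired up
like the existing "highmem" machine property in hw/arm/virt.c; a
rough sketch with invented names -- the highmem_ecam field does not
exist today -- against the current property API:)

  static bool virt_get_highmem_ecam(Object *obj, Error **errp)
  {
      VirtMachineState *vms = VIRT_MACHINE(obj);

      return vms->highmem_ecam;
  }

  static void virt_set_highmem_ecam(Object *obj, bool value, Error **errp)
  {
      VirtMachineState *vms = VIRT_MACHINE(obj);

      vms->highmem_ecam = value;
  }

  /* in virt_instance_init(): */
  object_property_add_bool(obj, "highmem-ecam", virt_get_highmem_ecam,
                           virt_set_highmem_ecam, NULL);
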

I refer to my earlier email here:

  http://mid.mail-archive.com/address@hidden

This series flips the sole ECAM range exposed to the guest to a large
one located above 4GB. That's a problem because -- to my understanding
-- it breaks 32-bit ARM UEFI builds, unless you change the QEMU command
line.

(1) Please enable the "firmware repo" from Gerd's site:

https://www.kraxel.org/repos/

(2) Please install the "edk2.git-arm" package.

(3) Please run the 32-bit ARM UEFI firmware, with qemu-system-aarch64,
in a separate directory, as follows (note: TCG only, KVM not needed):

  cp /usr/share/edk2.git/arm/vars-template-pflash.raw vars
  FWBIN=/usr/share/edk2.git/arm/QEMU_EFI-pflash.raw

  qemu-system-aarch64 \
    -nodefaults \
    -no-user-config \
    \
    -M virt \
    -cpu cortex-a15 \
    -m 1024 \
    \
    -drive if=pflash,format=raw,file=$FWBIN,readonly \
    -drive if=pflash,format=raw,file=vars \
    \
    -device virtio-gpu-pci \
    -device qemu-xhci \
    -device usb-kbd \
    \
    -chardev stdio,signal=off,mux=on,id=char0 \
    -mon chardev=char0,mode=readline \
    -serial chardev:char0

This will boot the UEFI shell for you in a graphical window and take
input from the keyboard in that window. A virtio-gpu-pci device is used
as the GPU (a PCI Express virtio device), and a USB 3.0 keyboard is
used as the human input device (the USB 3.0 controller, qemu-xhci, is
also PCI Express).


I didn't test it, but I expect that this series, when applied as-is,
will break the above use case, unless highmem is explicitly disabled.

I think the first patch is OK (modulo the runaway empty line at the end
of acpi_dsdt_add_pci()), while realizing my review cannot be complete.
:)

Regarding the second patch, I do believe we need "more sophistication"
there. For example, I guess it could be possible to distinguish "-cpu
cortex-a15" from "-cpu cortex-a57" somehow, and stick with the low/small
ECAM in the former case. (The 32-bit firmware already runs on cortex-a15
only, and not on cortex-a57, according to my testing.)
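
(A hedged sketch of what I mean -- assuming the decision can be made
once the CPUs are realized, QEMU's existing arm_feature() helper can
tell a 64-bit-capable core apart from a 32-bit-only one; the
highmem_ecam field is invented here:)

  /* sketch only: stick with the low 16MB ECAM for 32-bit-only CPUs
   * such as cortex-a15; use the high 256MB ECAM otherwise */
  ARMCPU *cpu = ARM_CPU(first_cpu);
  bool aarch64 = arm_feature(&cpu->env, ARM_FEATURE_AARCH64);

  vms->highmem_ecam = vms->highmem && aarch64;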

Thanks,
Laszlo


