
Re: [Qemu-devel] Could not add PCI device with big memory to aarch64 VMs


From: liang yan
Subject: Re: [Qemu-devel] Could not add PCI device with big memory to aarch64 VMs
Date: Mon, 30 Nov 2015 11:45:18 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.2.0



On 11/04/2015 05:53 PM, Laszlo Ersek wrote:
On 11/04/15 23:22, liang yan wrote:
Hello, Laszlo,


(2) There is also a problem: once I use memory bigger than 256M for
ivshmem, it cannot get through UEFI. The error message is:

PciBus: Discovered PCI @ [00|01|00]
   BAR[0]: Type =  Mem32; Alignment = 0xFFF;        Length = 0x100;        Offset = 0x10
   BAR[1]: Type =  Mem32; Alignment = 0xFFF;        Length = 0x1000;       Offset = 0x14
   BAR[2]: Type = PMem64; Alignment = 0x3FFFFFFF;   Length = 0x40000000;   Offset = 0x18

PciBus: HostBridge->SubmitResources() - Success
ASSERT /home/liang/studio/edk2/ArmVirtPkg/PciHostBridgeDxe/PciHostBridge.c(449): ((BOOLEAN)(0==1))


I am wondering: is there a memory limitation for PCIe devices under the
QEMU environment?


Thank you in advance; any information would be appreciated.
(CC'ing Ard.)

"Apparently", the firmware-side counterpart of QEMU commit 5125f9cd2532
has never been contributed to edk2.

Therefore the ProcessPciHost() function in
"ArmVirtPkg/VirtFdtDxe/VirtFdtDxe.c" ignores the
DTB_PCI_HOST_RANGE_MMIO64 type range from the DTB. (Thus only
DTB_PCI_HOST_RANGE_MMIO32 is recognized as PCI MMIO aperture.)

However, even if said driver was extended to parse the new 64-bit
aperture into PCDs (which wouldn't be hard), the
ArmVirtPkg/PciHostBridgeDxe driver would still have to be taught to look
at that aperture (from the PCDs) and to serve MMIO BAR allocation
requests from it. That could be hard.
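
To make the missing piece more concrete, here is a rough standalone
sketch (not the actual edk2 code) of the decoding work ProcessPciHost()
would have to grow: one record of the host bridge's DT "ranges" property
is 7 big-endian cells (phys.hi, PCI address hi/lo, CPU address hi/lo,
size hi/lo), and a 64-bit MMIO window carries space code 0b11 in
phys.hi. The PcdPciMmio64Base/PcdPciMmio64Size destinations named in the
comments are assumptions, not existing PCDs:

/* Sketch: decode one DT "ranges" record and pick out a 64-bit MMIO
 * window. Compile with gcc/clang (uses __builtin_bswap32). */
#include <stdint.h>
#include <stdio.h>

#define RANGE_TYPEMASK 0x03000000u  /* space-code bits of phys.hi */
#define RANGE_MMIO32   0x02000000u
#define RANGE_MMIO64   0x03000000u

/* Join two big-endian 32-bit cells into a host-order 64-bit value. */
static uint64_t Cells64 (const uint32_t *Cells)
{
  return ((uint64_t)__builtin_bswap32 (Cells[0]) << 32) |
         __builtin_bswap32 (Cells[1]);
}

int main (void)
{
  /* Hypothetical record: a 64-bit MMIO window at CPU address 512 GB,
   * 512 GB in size -- the kind of range QEMU commit 5125f9cd2532
   * exports in the DTB. */
  const uint32_t Record[7] = {
    __builtin_bswap32 (RANGE_MMIO64),                   /* phys.hi     */
    __builtin_bswap32 (0x80), __builtin_bswap32 (0x0),  /* PCI address */
    __builtin_bswap32 (0x80), __builtin_bswap32 (0x0),  /* CPU address */
    __builtin_bswap32 (0x80), __builtin_bswap32 (0x0),  /* size        */
  };

  if ((__builtin_bswap32 (Record[0]) & RANGE_TYPEMASK) == RANGE_MMIO64) {
    /* These two values are what a new PcdPciMmio64Base/PcdPciMmio64Size
     * pair (assumed names) would be set to. */
    printf ("Mmio64Base = 0x%llx, Mmio64Size = 0x%llx\n",
            (unsigned long long)Cells64 (&Record[3]),
            (unsigned long long)Cells64 (&Record[5]));
  }
  return 0;
}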

Please check edk2 commits e48f1f15b0e2^..e5ceb6c9d390, approximately,
for the background on the current code. See also chapter 13 "Protocols -
PCI Bus Support" in the UEFI spec.

Patches welcome. :)

(A separate note on ACPI vs. DT: the firmware forwards *both* from QEMU
to the runtime guest OS. However, the firmware parses only the DT for
its own purposes.)
Hello, Laszlo,

Thanks for your advice above; it's very helpful.

While debugging, I also found some problems with 32-bit PCI devices.
I hope I can get some clues from you.

I checked 512M, 1G, and 2G devices. (A 4G device returns an invalid-parameter error, so I think it may be treated as a 64-bit device; is this right?)


First,

For all devices, the BAR allocation starts from base address 0x3EFEFFFF.

ProcessPciHost: Config[0x3F000000+0x1000000) Bus[0x0..0xF] Io[0x0+0x10000)@0x3EFF0000 Mem[0x10000000+0x2EFF0000)@0x0

PcdPciMmio32Base is  10000000=====================
PcdPciMmio32Size is  2EFF0000=====================
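
(For reference, the arithmetic behind that line: the 32-bit window
Mem[0x10000000+0x2EFF0000) ends at 0x10000000 + 0x2EFF0000 = 0x3EFF0000,
so its last usable byte is 0x3EFEFFFF -- exactly the top-down starting
point above. The whole window is only 0x2EFF0000 bytes, about 751 MB.)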


Second,

The allocator could not get a valid new base address when searching the memory space in the GCD map.

For 512M devices,

*BaseAddress = (*BaseAddress + 1 - Length) & (~AlignmentMask);

BaseAddress is 3EFEFFFF==========================
new BaseAddress is 1EEF0000==========================
~AlignmentMask is E0000000==========================
Final BaseAddress is 0000

Status = CoreSearchGcdMapEntry (*BaseAddress, Length, &StartLink, &EndLink, Map);
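
To see why the 512M case ends up at zero, here is the same computation
as a standalone snippet (a sketch; the constants come from the debug
output above, with the 512 MB alignment mask implied by
~AlignmentMask = 0xE0000000):

#include <stdint.h>
#include <stdio.h>

int main (void)
{
  uint64_t BaseAddress   = 0x3EFEFFFFULL;  /* top of the 32-bit window */
  uint64_t Length        = 0x20000000ULL;  /* 512 MB BAR               */
  uint64_t AlignmentMask = 0x1FFFFFFFULL;  /* 512 MB alignment         */

  /* The quoted line from the allocator: take the highest candidate that
   * still fits below the window top, aligned down to the BAR alignment. */
  BaseAddress = (BaseAddress + 1 - Length) & ~AlignmentMask;

  /* Prints 0x0: the only 512 MB-aligned candidate at or below the top
   * of the ~751 MB window is address 0, which lies outside the window
   * [0x10000000, 0x3EFF0000) -- so CoreSearchGcdMapEntry() cannot
   * succeed there. */
  printf ("Final BaseAddress = 0x%llx\n", (unsigned long long)BaseAddress);
  return 0;
}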



For bigger devices:

The search stops entirely, because in the code below Length is already bigger than MaxAddress (0x3EFEFFFF):

if ((Entry->BaseAddress + Length) > MaxAddress) {
  continue;
}
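
(Concretely: a 1 GB BAR has Length = 0x40000000, which exceeds
MaxAddress = 0x3EFEFFFF all by itself, so Entry->BaseAddress + Length >
MaxAddress holds for every GCD map entry and the loop skips them all.
Nothing that large can fit in the ~751 MB 32-bit window; only a larger,
or 64-bit, aperture can help.)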


I also checked ArmVirtQemu.dsc, where these are all set to 0:

  gArmPlatformTokenSpaceGuid.PcdPciBusMin|0x0
  gArmPlatformTokenSpaceGuid.PcdPciBusMax|0x0
  gArmPlatformTokenSpaceGuid.PcdPciIoBase|0x0
  gArmPlatformTokenSpaceGuid.PcdPciIoSize|0x0
  gArmPlatformTokenSpaceGuid.PcdPciIoTranslation|0x0
  gArmPlatformTokenSpaceGuid.PcdPciMmio32Base|0x0
  gArmPlatformTokenSpaceGuid.PcdPciMmio32Size|0x0
  gEfiMdePkgTokenSpaceGuid.PcdPciExpressBaseAddress|0x0
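
(Note: those zeros are only build-time defaults. The PCDs are dynamic,
and VirtFdtDxe's ProcessPciHost() overwrites them at boot from the
DTB -- that is where the nonzero values in the log further up come from,
via something like

  PcdSet32 (PcdPciMmio32Base, MmioBase);  /* 0x10000000 */
  PcdSet32 (PcdPciMmio32Size, MmioSize);  /* 0x2EFF0000 */

so editing the .dsc defaults alone would not widen the aperture.)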


Do you think I should change PcdPciMmio32Base and PcdPciMmio32Size, or make some change to the GCD entry list, so that it can allocate resources for these PCI devices (CoreSearchGcdMapEntry)?


Looking forward to your reply.


Thanks,
Liang





