
Re: [Qemu-devel] [PATCH 0/2] hw/pci-host/x86: extend the 64-bit PCI hole


From: Laszlo Ersek
Subject: Re: [Qemu-devel] [PATCH 0/2] hw/pci-host/x86: extend the 64-bit PCI hole relative to the fw-assigned base
Date: Tue, 25 Sep 2018 19:31:34 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1

On 09/25/18 17:04, Michael S. Tsirkin wrote:
> On Tue, Sep 25, 2018 at 12:13:44AM +0200, Laszlo Ersek wrote:
>> This is based on the discussion in the "[Qemu-devel] 64-bit MMIO
>> aperture expansion" thread, which starts at
>> <http://mid.mail-archive.com/address@hidden>.
>>
>> Cc: "Michael S. Tsirkin" <address@hidden>
>> Cc: Alex Williamson <address@hidden>
>> Cc: Marcel Apfelbaum <address@hidden>
> 
> Mentioning
> https://bugs.launchpad.net/qemu/+bug/1778350
> 
> here - do any of these patches help?

Thanks for the reference.

I'm going to add an RFT (request for testing) to that LP soon. However,
I find your remark
<https://bugs.launchpad.net/qemu/+bug/1778350/comments/6> instructive:
"I looked at it and while I might be wrong, I suspect it's a bug in ACPI
parser in that version of Linux."

In the ACPI builder, we create the qword memory descriptor for the _CRS
only if the 64-bit hole is not empty.

(The exact expression for gating the descriptor's generation has gone
through a number of iterations, but AFAICS, the condition was first
added in commit 60efd4297d44, "pc: acpi-build: create PCI0._CRS
dynamically", 2015-03-01.)

Therefore, if an ACPI parser chokes on a qword memory descriptor in a
_CRS in general, then 9fa99d2519cb would trigger that issue. And,
setting "x-pci-hole64-fix=off" would mask it again.

This series does not change *when* the memory descriptor is generated;
it only changes *how* (with what contents) it is generated, when it is
generated. So I don't expect it to make a difference for LP#1778350.

But, I'll ask the reporter to apply this and test it.

Thanks!
Laszlo
