Re: [RFC PATCH v2 0/4] Use ACPI PCI hot-plug for q35


From: Laszlo Ersek
Subject: Re: [RFC PATCH v2 0/4] Use ACPI PCI hot-plug for q35
Date: Mon, 24 Aug 2020 17:03:12 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0 Thunderbird/52.9.1

On 08/24/20 13:51, Ani Sinha wrote:
> On Mon, Aug 24, 2020 at 5:06 PM Igor Mammedov <imammedo@redhat.com> wrote:
>>
>> On Sat, 22 Aug 2020 16:25:55 +0200
>> Laszlo Ersek <lersek@redhat.com> wrote:
>>
>>> +Marcel, Laine, Daniel
>>>
>>> On 08/21/20 12:30, Igor Mammedov wrote:
>>>> On Tue, 18 Aug 2020 23:52:23 +0200
>>>> Julia Suvorova <jusual@redhat.com> wrote:
>>>>
>>>>> PCIe native hot-plug has numerous problems with racing events and
>>>>> unpredictable guest behaviour (Windows).
>>>> My request from the previous review to document these mysterious problems
>>>> hasn't been addressed.
>>>> Please see v1 for the comments, and add the requested info at least to the
>>>> cover letter, or to a commit message.
>>>
>>> Igor, I assume you are referring to
>>>
>>>   http://mid.mail-archive.com/20200715153321.3495e62d@redhat.com
>>>
>>> and I couldn't agree more.
>>>
>>> I'd like to understand the specific motivation for this patch series.
>>>
>>> - I'm very concerned that it could regress various hotplug scenarios, at
>>> least with OVMF.
>>>
>>> So minimally I'm hoping that the work is being meticulously tested with
>>> OVMF.
>>>
>>> - I don't recall testing native PCIe hot-*unplug*, but we had repeatedly
>>> tested native PCIe plug with both Linux and Windows guests, and in the
>>> end, it worked fine. (I recall working with at least Marcel on that; one
>>> historical reference I can find now is
>>> <https://bugzilla.tianocore.org/show_bug.cgi?id=75>.)
>>>
>>> I remember users confirming that native PCIe hotplug worked with
>>> assigned physical devices even (e.g. GPUs), assuming they made use of
>>> the resource reservation capability (e.g. they'd reserve large MMIO64
>>> areas during initial enumeration).
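>>>
>>> For example, a port configured along these lines (the ID, chassis number,
>>> and the 32G value are just for illustration) asks the firmware to keep a
>>> large 64-bit prefetchable window open behind it, so a GPU hot-plugged there
>>> later can fit:
>>>
>>>   $ qemu-system-x86_64 -M q35 ... \
>>>       -device pcie-root-port,id=rp1,bus=pcie.0,chassis=1,pref64-reserve=32G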
>>>
>>> - I seem to remember that we had tested hotplug on extra root bridges
>>> (PXB) too; regressing that -- per the pxb-pcie mention in the blurb,
>>> quoted below -- wouldn't be great. At least, please don't flip the big
>>> switch so soon (IIUC, there is a big switch being proposed).
>>
>> I'm suggesting to make ACPI hotplug on q35 opt-in,
>> because it's only Windows guests that don't work well with unplug.
>> Linux guests seem to be just fine with native hotplug.
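>>
>> (As a sketch only -- the property name below is purely illustrative, the
>> actual knob is up to this series -- the opt-in could end up looking roughly
>> like:
>>
>>   $ qemu-system-x86_64 -M q35 \
>>       -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=on ...)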
>>
>>> - The documentation at "docs/pcie.txt" and "docs/pcie_pci_bridge.txt" is
>>> chock-full of hotplug references; we had spent days if not weeks for
>>> writing and reviewing those. I hope it's being evaluated how much of
>>> that is going to need an update.
>>>
>>> In particular, do we know how this work is going to affect the resource
>>> reservation capability?
>> My hunch is that it should not be affected (but I will not bet on it).
>> ACPI hotplug just changes the route by which the hotplug event is delivered,
>> and unplug happens via ACPI as well. That works around drivers
>> offlining/onlining devices in rapid succession during native unplug (that's
>> a rather crude justification).
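>>
>> (Either way the management-side flow stays the same; e.g. with HMP, assuming
>> a root port with id=rp1 is already present:
>>
>>   (qemu) device_add virtio-net-pci,id=nic1,bus=rp1
>>   (qemu) device_del nic1
>>
>> only the mechanism used to notify the guest differs: an ACPI GPE event
>> instead of the native PCIe slot interrupt.)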
>>
>> So I'd like reasons to be well documented, including what ideas were
>> tried to fix or workaround those issues (beside ACPI one), so the next
>> time we look at it we don't have to start from ground up.
>>
>>
>>> $ qemu-system-x86_64 -device pcie-root-port,\? | grep reserve
>>>   bus-reserve=<uint32>   -  (default: 4294967295)
>>>   io-reserve=<size>      -  (default: 18446744073709551615)
>>>   mem-reserve=<size>     -  (default: 18446744073709551615)
>>>   pref32-reserve=<size>  -  (default: 18446744073709551615)
>>>   pref64-reserve=<size>  -  (default: 18446744073709551615)
>>>
>>> The OVMF-side code (OvmfPkg/PciHotPlugInitDxe) was tough to write. As
>>> far as I remember, especially commit fe4049471bdf
>>> ("OvmfPkg/PciHotPlugInitDxe: translate QEMU's resource reservation
>>> hints", 2017-10-03) had taken a lot of navel-gazing. So the best answer
>>> I'm looking for here is "this series does not affect resource
>>> reservation at all".
>>>
>>> - If my message is suggesting that I'm alarmed: that's an
>>> understatement. This stuff is a mine-field. A good example is Gerd's
>>> (correct!) response "Oh no, please don't" to Igor's question in the v1
>>> thread, as to whether the piix4 IO port range could be reused:
>>>
>>>   http://mid.mail-archive.com/20200715065751.ogchtdqmnn7cxsyi@sirius.home.kraxel.org
>>>
>>> That kind of "reuse" would have been a catastrophe, because for PCI IO
>>> port aperture, OVMF uses [0xC000..0xFFFF] on i440fx, but
>>> [0x6000..0xFFFF] on q35:
>>>
>>>   commit bba734ab4c7c9b4386d39420983bf61484f65dda
>>>   Author: Laszlo Ersek <lersek@redhat.com>
>>>   Date:   Mon May 9 22:54:36 2016 +0200
>>>
>>>       OvmfPkg/PlatformPei: provide 10 * 4KB of PCI IO Port space on Q35
>>>
>>>       This can accommodate 10 bridges (including root bridges, PCIe upstream
>>>       and downstream ports, etc -- see
>>>       <https://bugzilla.redhat.com/show_bug.cgi?id=1333238#c12> for more
>>>       details).
>>>
>>>       10 is not a whole lot, but closer to the architectural limit of 15 than
>>>       our current 4, so it can be considered a stop-gap solution until all
>>>       guests manage to migrate to virtio-1.0, and no longer need PCI IO BARs
>>>       behind PCIe downstream ports.
>>>
>>>       Cc: Gabriel Somlo <somlo@cmu.edu>
>>>       Cc: Jordan Justen <jordan.l.justen@intel.com>
>>>       Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1333238
>>>       Contributed-under: TianoCore Contribution Agreement 1.0
>>>       Signed-off-by: Laszlo Ersek <lersek@redhat.com>
>>>       Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
>>>       Tested-by: Gabriel Somlo <somlo@cmu.edu>
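>>>
>>> (In other words, the q35 aperture above works out to exactly ten 4 KiB I/O
>>> windows, versus four on i440fx:
>>>
>>>   $ echo $(( (0xFFFF - 0xC000 + 1) / 0x1000 ))   # i440fx: 0xC000..0xFFFF
>>>   4
>>>   $ echo $(( (0xFFFF - 0x6000 + 1) / 0x1000 ))   # q35: 0x6000..0xFFFF
>>>   10
>>>
>>> which is where the "10 * 4KB" in the commit subject comes from.)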
>>>
>>> - If native PCIe hot-unplug is not working well (or at all) right now,
>>> then I guess I can't just summarily say "we had better drop this like
>>> hot potato".
>>>
>>> But then, if we are committed to *juggling* that potato, we should at
>>> least document the use case / motivation / current issues meticulously,
>>> please. Do we have a public BZ at least?
>>>
>>> - The other work, with regard to *disabling* unplug, which seems to be
>>> progressing in parallel, is similarly in need of a good explanation, in
>>> my opinion:
>>>
>>>   http://mid.mail-archive.com/20200820092157.17792-1-ani@anisinha.ca
>>>
>>> Yes, I have read Laine's long email, linked from the QEMU cover letter:
>>>
>>>   https://www.redhat.com/archives/libvir-list/2020-February/msg00110.html
>>>
>>> The whole use case "prevent guest admins from unplugging virtual
>>> devices" still doesn't make any sense to me. How is "some cloud admins
>>> don't want that" acceptable at face value, without further explanation?
>> My take on it is that Windows always exposes the unplug icon and lets VM
>> users unplug PCI devices. Notably, a user is able to click away the only NIC
>> the VM was configured with.
> 
> Correct. Also, sometimes admins may not want certain other PCI devices, such
> as the balloon device, to be hot-unpluggable.
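> 
> (One way to get that effect today, if I remember the property name right, is
> to disable hot-plug on the slot of the pcie-root-port the device sits behind,
> e.g.:
> 
>   -device pcie-root-port,id=rp2,bus=pcie.0,chassis=2,hotplug=off \
>   -device virtio-balloon-pci,bus=rp2
> 
> though that only covers the native PCIe slot, hence the interest in an
> ACPI-side knob as well.)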
> 
>>
>> Unfortunately the 'feature' can't be fixed on the guest side,
> 
> It can be, using driver hacks, but they are very operating-system-specific
> and also need to be applied per VM every time it is powered on.
> 
>> that's why hypervisors implement such a hack to disable ACPI hotplug. Which
>> I guess is backed by demand from users deploying Windows virtual desktops.
>>
>> PS:
>> I'd have made PCI hotplug opt-in, since not everyone needs it.
>> But that ship sailed long ago.

Thank you both for explaining.

All of these use cases seem justified to me.

Given that they are basically quirks for addressing guest-OS-specific
peculiarities, changing machine type defaults does not seem warranted.
In my opinion, all of these bits should be opt-in. If we need to capture
permanent recommendations, we can always document them, and/or libosinfo
can expose them in a machine-readable form.

Thanks
Laszlo



