From: Laszlo Ersek
Subject: Re: [RFC 0/3] acpi: cphp: add CPHP_GET_CPU_ID_CMD command to cpu hotplug MMIO interface
Date: Fri, 11 Oct 2019 08:54:23 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1

On 10/10/19 17:57, Igor Mammedov wrote:
> On Thu, 10 Oct 2019 09:59:42 -0400
> "Michael S. Tsirkin" <address@hidden> wrote:
> 
>> On Thu, Oct 10, 2019 at 03:39:12PM +0200, Igor Mammedov wrote:
>>> On Thu, 10 Oct 2019 05:56:55 -0400
>>> "Michael S. Tsirkin" <address@hidden> wrote:
>>>
>>>> On Wed, Oct 09, 2019 at 09:22:49AM -0400, Igor Mammedov wrote:
>>>>> As an alternative to passing topology info to firmware via new fwcfg files,
>>>>> so that it could recreate APIC IDs and the order in which CPUs are
>>>>> enumerated, extend the CPU hotplug interface to return the APIC ID in
>>>>> response to the new command CPHP_GET_CPU_ID_CMD.
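(A rough sketch of how firmware might drive such a command through the hotplug
register block follows. The base address, register offsets and the command
value are assumptions based on the existing hotplug interface documentation,
not copied from these patches; how the APIC ID is actually returned is defined
by the RFC itself.)

#include <stdint.h>

/* Assumed layout of the modern CPU hotplug register block (PC machines). */
#define CPU_HOTPLUG_BASE     0x0cd8
#define CPHP_REG_SELECTOR    0x00   /* 32-bit CPU selector, write        */
#define CPHP_REG_CMD         0x05   /* 8-bit command field, write        */
#define CPHP_REG_CMD_DATA    0x08   /* command data, read                */
#define CPHP_GET_CPU_ID_CMD  0x3    /* assumed value for the new command */

static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint32_t inl(uint16_t port)
{
    uint32_t val;
    __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* Select a CPU, issue the command, read back (the low 32 bits of) its APIC ID. */
static uint32_t cphp_get_apic_id(uint32_t selector)
{
    outl(CPU_HOTPLUG_BASE + CPHP_REG_SELECTOR, selector);
    outb(CPU_HOTPLUG_BASE + CPHP_REG_CMD, CPHP_GET_CPU_ID_CMD);
    return inl(CPU_HOTPLUG_BASE + CPHP_REG_CMD_DATA);
}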
>>>>
>>>> One big piece missing here is motivation:
>>> I thought the only willing reader was Laszlo (who is aware of the context),
>>> so I skipped the details and confused others :/
>>>
>>>> Who's going to use this interface?
>>> In its current state it's for firmware, since ACPI tables can cheat
>>> by having APIC IDs statically built in.
>>>
>>> If we were creating CPU objects in ACPI dynamically,
>>> we would be using this command as well.
>>
>> I'm not sure how it's even possible to create devices dynamically. Well
>> I guess it's possible with LoadTable. Is this what you had in
>> mind?
> 
> Yep. I've even played with this shiny toy and I can say it's a very tempting
> one. On the other hand, even leaving aside the problem of legacy OSes not
> working with it, it's hard to debug and reproduce compared to static tables.
> So from a maintenance pov I dislike it enough to be against it.
> 
> 
>>> It would save
>>> us quite a bit of space in the ACPI blob, but it would be a pain
>>> to debug and diagnose problems in ACPI tables, so I'd rather
>>> stay with static CPU descriptions in ACPI tables for the sake
>>> of maintenance.
>>>> So far CPU hotplug was used by ACPI, so we didn't
>>>> really commit to a fixed interface too strongly.
>>>>
>>>> Is this a replacement to Laszlo's fw cfg interface?
>>>> If yes is the idea that OVMF going to depend on CPU hotplug directly then?
>>>> It does not depend on it now, does it?
>>> It doesn't, but then it doesn't support CPU hotplug either.
>>> OVMF (SMM) needs to cooperate with QEMU *and* the ACPI tables to perform
>>> the task, and using the same interface/code path between all involved
>>> parties makes the task easier and more robust, with the least amount of
>>> duplicated interfaces.
>>>
>>> Re-implementing an alternative interface for firmware (fwcfg or whatnot)
>>> would work as well, but it's only a question of time until ACPI and
>>> this new interface disagree on how the world works and the process falls
>>> apart.
>>
>> Then we should consider switching acpi to use fw cfg.
>> Or build another interface that can scale.
> 
> Could be an option; it would be a pain to write a driver in AML for fwcfg
> access though (I've looked at the possibility of accessing fwcfg from AML
> about a year ago and gave up. I'm definitely not volunteering for a second
> attempt, and can't even give an estimate of whether it's a viable approach).
> 
> But what scaling issue are you talking about, exactly?
> With the current CPU hotplug interface we can handle up to UINT32_MAX CPUs,
> and extend the interface without needing to increase the IO window we are
> using now.
> 
> Granted, IO access is not the fastest compared to fwcfg in DMA mode, but we
> are already doing a stop-machine when switching to SMM, which is orders of
> magnitude slower. The consensus was to compromise on the speed of CPU hotplug
> versus a more complex and more problematic unicast SMM mode in OVMF (I can't
> find the particular email, but Laszlo and I have already discussed it, when I
> considered ways to optimize hotplug speed).

Right, the speed of handling a CPU hotplug event is basically
irrelevant, whereas broadcast SMI (in response to writing IO port 0xB2)
is really important.
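(For context, "writing IO port 0xB2" refers to the APM control port. A minimal
sketch, assuming the broadcast-SMI feature has been negotiated with QEMU and
using a placeholder SMI command value rather than OVMF's real one:)

#include <stdint.h>

#define APM_CNT_PORT 0xB2   /* ICH9 APM control port; a write raises an SMI */

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static void raise_broadcast_smi(void)
{
    /* With broadcast SMI negotiated, the SMI is delivered to all VCPUs at
     * once rather than only to the one doing the write. */
    outb(APM_CNT_PORT, 0x00);   /* 0x00 is a placeholder command value */
}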

Thanks
Laszlo


