From: Laszlo Ersek
Subject: Re: [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
Date: Tue, 8 Sep 2020 13:00:20 +0200
On 09/08/20 11:35, Laszlo Ersek wrote:
> On 09/08/20 09:39, Igor Mammedov wrote:
>> On Mon, 7 Sep 2020 17:17:52 +0200
>> Laszlo Ersek <lersek@redhat.com> wrote:
>
>>>  1 Method (CSCN, 0, Serialized)
>>>  2 {
>>>  3     Acquire (\_SB.PCI0.PRES.CPLK, 0xFFFF)
>>>  4     Name (CNEW, Package (0xFF){})
>>>  5     Local_Uid = Zero
>>>  6     Local_HasJob = One
>>>  7     While ((Local_HasJob == One))
>>>  8     {
>>>  9         Local_HasJob = Zero
>>> 10         Local_HasEvent = One
>>> 11         Local_NumAddedCpus = Zero
>>> 12         While (((Local_HasEvent == One) && (Local_Uid < 0x08)))
>>> 13         {
>>> 14             Local_HasEvent = Zero
>>> 15             \_SB.PCI0.PRES.CSEL = Local_Uid
>>> 16             \_SB.PCI0.PRES.CCMD = Zero
>>> 17             If ((\_SB.PCI0.PRES.CDAT < Local_Uid))
>>> 18             {
>>> 19                 Break
>>> 20             }
>>> 21
>>> 22             If ((Local_NumAddedCpus == 0xFF))
>>> 23             {
>>> 24                 Local_HasJob = One
>>> 25                 Break
>>> 26             }
>>> 27
>>> 28             Local_Uid = \_SB.PCI0.PRES.CDAT
>>> 29             If ((\_SB.PCI0.PRES.CINS == One))
>>> 30             {
>>> 31                 CNEW [Local_NumAddedCpus] = Local_Uid
>>> 32                 Local_NumAddedCpus++
>>> 33                 Local_HasEvent = One
>>> 34             }
>>> 35             ElseIf ((\_SB.PCI0.PRES.CRMV == One))
>>> 36             {
>>> 37                 CTFY (Local_Uid, 0x03)
>>> 38                 \_SB.PCI0.PRES.CRMV = One
>>> 39                 Local_HasEvent = One
>>> 40             }
>>> 41
>>> 42             Local_Uid++
>>> 43         }
>>> 44
>>> 45         If ((Local_NumAddedCpus > Zero))
>>> 46         {
>>> 47             \_SB.PCI0.SMI0.SMIC = 0x04
>>> 48         }
>>> 49
>>> 50         Local_CpuIdx = Zero
>>> 51         While ((Local_CpuIdx < Local_NumAddedCpus))
>>> 52         {
>>> 53             Local_Uid = DerefOf (CNEW [Local_CpuIdx])
>>> 54             CTFY (Local_Uid, One)
>>> 55             Debug = Local_Uid
>>> 56             \_SB.PCI0.PRES.CSEL = Local_Uid
>>> 57             \_SB.PCI0.PRES.CINS = One
>>> 58             Local_CpuIdx++
>>> 59         }
>>> 60     }
>>> 61
>>> 62     Release (\_SB.PCI0.PRES.CPLK)
>>> 63 }
>>>
>>> When we take the Break on line 25, then:
>>>
>>> (a) on line 25, the following equality holds:
>>>
>>> Local_Uid == CNEW[Local_NumAddedCpus - 1] + 1
>>>
>>> (b) on line 60, the following equality holds:
>>>
>>> Local_Uid == CNEW[Local_NumAddedCpus - 1]
>>>
>>> This means that, when we write Local_Uid to CSEL on line 15 again, then:
>>>
>>> - we effectively re-investigate the last-cached CPU (with selector value
>>> CNEW[Local_NumAddedCpus - 1])
>>>
>>> - rather than resuming the scanning right after it -- that is, with
>>> selector value (CNEW[Local_NumAddedCpus - 1] + 1) --, in the spirit of
>>> line 42.
>>>
>>> My question is: is this "rewind" intentional?
>>>
>>> Now, I don't see a functionality problem with this, as on line 57, we
>>> clear the pending insert event for the last-cached CPU, so when we
>>> re-check it, the "get pending" command will simply seek past it.
>>>
>>> But I'd like to understand if this is *precisely* your intent, or if
>>> it's an oversight and it just ends up working OK.
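For illustration, a minimal, self-contained C model of the "get pending event" command described above, showing why the rewind is harmless. The register semantics are paraphrased from QEMU's docs/specs/acpi_cpu_hotplug.txt; the names insert_evt and get_pending are invented for this sketch, and the wrap-around behavior is simplified.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_CPUS 8

    static bool insert_evt[NUM_CPUS];  /* per-CPU pending-insert (CINS) bits */

    /* Model of "CSEL = sel; CCMD = Zero; read CDAT": returns the first
     * UID at or after 'sel' with a pending event.  When there is none,
     * the selector wraps around, which the AML detects on line 17 as
     * "CDAT < Local_Uid" and breaks out of the scan.  (Simplified: the
     * wrap is modeled as a plain 0.) */
    static int get_pending(int sel)
    {
        for (int uid = sel; uid < NUM_CPUS; uid++) {
            if (insert_evt[uid]) {
                return uid;
            }
        }
        return 0;
    }

    int main(void)
    {
        insert_evt[3] = true;
        insert_evt[5] = true;

        int uid = get_pending(0);
        printf("first pending event: UID %d\n", uid);   /* prints 3 */

        insert_evt[uid] = false;   /* AML line 57: store One to CINS */

        /* The "rewind": re-selecting UID 3 after its insert event has
         * been cleared is harmless; the scan seeks past it to UID 5. */
        printf("rescan from UID %d: UID %d\n", uid, get_pending(uid));
        return 0;
    }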
>> it's the latter (it should not have any adverse effects), so I didn't
>> care much about restarting from the last processed CPU.
>>
>> how about moving
>>
>> 22             If ((Local_NumAddedCpus == 0xFF))
>> 23             {
>> 24                 Local_HasJob = One
>> 25                 Break
>> 26             }
>>
>> right after
>>
>> 40             }
>> 41
>> 42             Local_Uid++
>>
>> instead of adding an extra 'if' at the end of the outer loop?
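To make the proposed move concrete, here is a runnable C paraphrase (not ASL) of the inner scan loop with the batch-full check relocated after the increment. The helper get_pending() and the event array stand in for the CSEL/CCMD/CDAT/CINS register protocol; names and semantics are simplified, not QEMU's.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_CPUS  8
    #define BATCH_MAX 0xFF

    static bool insert_evt[NUM_CPUS] = { [2] = true, [4] = true };

    static int get_pending(int sel)   /* same simplified model as above */
    {
        for (int uid = sel; uid < NUM_CPUS; uid++) {
            if (insert_evt[uid]) {
                return uid;
            }
        }
        return 0;
    }

    int main(void)
    {
        int cnew[BATCH_MAX], num_added = 0, uid = 0, dat;
        bool has_event = true, has_job = false;

        while (has_event && uid < NUM_CPUS) {
            has_event = false;
            dat = get_pending(uid);        /* CSEL = uid; CCMD = Zero */
            if (dat < uid) {
                break;                     /* wrapped: no more events */
            }
            uid = dat;
            if (insert_evt[uid]) {
                cnew[num_added++] = uid;   /* cache UID for post-SMI Notify */
                has_event = true;
            }
            uid++;
            /* Relocated batch-full check: running it after the increment
             * means a full batch leaves uid == last-cached UID + 1. */
            if (num_added == BATCH_MAX) {
                has_job = true;
                break;
            }
        }
        printf("cached %d CPUs (first %d), has_job=%d, next uid=%d\n",
               num_added, cnew[0], has_job, uid);
        return 0;
    }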
>
> That only seems to save a CSEL write on line 15, during the first
> iteration of the outer loop. And we would still re-select the last
> selector from CNEW in the second iteration of the outer loop, because
> the flush loop (lines 51-59) leaves Local_Uid equal to the last
> element of CNEW.
>
> But, again, there's no bug; I just wanted to understand your intent.
>
> Can you please squash the following patch:
>
>> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
>> index 12839720018e..8dd4d8ebbf55 100644
>> --- a/hw/acpi/cpu.c
>> +++ b/hw/acpi/cpu.c
>> @@ -601,6 +601,15 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
>>              aml_append(while_ctx, aml_increment(cpu_idx));
>>          }
>>          aml_append(while_ctx2, while_ctx);
>> +        /*
>> +         * If another batch is needed, then it will resume scanning
>> +         * exactly at -- and not after -- the last CPU that's currently
>> +         * in CPU_ADDED_LIST. In other words, the last CPU in
>> +         * CPU_ADDED_LIST is going to be re-checked. That's OK: we've
>> +         * just cleared the insert event for *all* CPUs in
>> +         * CPU_ADDED_LIST, including the last one. So the scan will
>> +         * simply seek past it.
>> +         */
>>      }
>>      aml_append(method, while_ctx2);
>>      aml_append(method, aml_release(ctrl_lock));
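For orientation, the loop objects named in the hunk nest roughly as sketched below. The aml_* helpers are QEMU's real aml-build API (include/hw/acpi/aml-build.h), but the surrounding declarations are elided and the structure is approximated rather than copied from hw/acpi/cpu.c.

    /* Approximate shape of the CSCN generator in build_cpus_aml();
     * ctrl_lock, has_job and while_ctx are built earlier in the real code. */
    Aml *method = aml_method("CSCN", 0, AML_SERIALIZED);
    aml_append(method, aml_acquire(ctrl_lock, 0xFFFF));

    Aml *while_ctx2 = aml_while(aml_equal(has_job, aml_int(1)));
    {
        /* ... inner scan loop (while_ctx) that fills CPU_ADDED_LIST,
         * the SMI trigger, and the flush loop that clears CINS ... */
        aml_append(while_ctx2, while_ctx);
        /* the squashed comment lands here, at the end of the outer loop
         * body: the next pass re-selects the last cached CPU, whose
         * insert event is already clear */
    }
    aml_append(method, while_ctx2);
    aml_append(method, aml_release(ctrl_lock));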
>
> With that:
>
> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
>
> I'll also follow up with test results for this patch (meaning a lowered
> "max_cpus_per_pass").
Tested-by: Laszlo Ersek <lersek@redhat.com>