From: Laurent Vivier
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH for 3.1] spapr: Fix ibm,max-associativity-domains property number of nodes
Date: Mon, 19 Nov 2018 14:48:34 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.3.0

On 19/11/2018 14:27, Greg Kurz wrote:
> On Mon, 19 Nov 2018 08:09:38 -0500
> Serhii Popovych <address@hidden> wrote:
> 
>> Laurent Vivier reported an off-by-one error: the maximum number of
>> NUMA nodes provided by qemu-kvm is one less than required by the
>> description of the "ibm,max-associativity-domains" property in LoPAPR.
>>
>> It appears that I misread the LoPAPR description of this property,
>> assuming it gives the last valid domain (the NUMA node here) instead
>> of the maximum number of domains.
>>
>>   ### Before hot-add
>>
>>   (qemu) info numa
>>   3 nodes
>>   node 0 cpus: 0
>>   node 0 size: 0 MB
>>   node 0 plugged: 0 MB
>>   node 1 cpus:
>>   node 1 size: 1024 MB
>>   node 1 plugged: 0 MB
>>   node 2 cpus:
>>   node 2 size: 0 MB
>>   node 2 plugged: 0 MB
>>
>>   $ numactl -H
>>   available: 2 nodes (0-1)
>>   node 0 cpus: 0
>>   node 0 size: 0 MB
>>   node 0 free: 0 MB
>>   node 1 cpus:
>>   node 1 size: 999 MB
>>   node 1 free: 658 MB
>>   node distances:
>>   node   0   1
>>     0:  10  40
>>     1:  40  10
>>
>>   ### Hot-add
>>
>>   (qemu) object_add memory-backend-ram,id=mem0,size=1G
>>   (qemu) device_add pc-dimm,id=dimm1,memdev=mem0,node=2
>>   (qemu) [   87.704898] pseries-hotplug-mem: Attempting to hot-add 4 ...
>>   <there is no "Initmem setup node 2 [mem 0xHEX-0xHEX]">
>>   [   87.705128] lpar: Attempting to resize HPT to shift 21
>>   ... <HPT resize messages>
>>
>>   ### After hot-add
>>
>>   (qemu) info numa
>>   3 nodes
>>   node 0 cpus: 0
>>   node 0 size: 0 MB
>>   node 0 plugged: 0 MB
>>   node 1 cpus:
>>   node 1 size: 1024 MB
>>   node 1 plugged: 0 MB
>>   node 2 cpus:
>>   node 2 size: 1024 MB
>>   node 2 plugged: 1024 MB
>>
>>   $ numactl -H
>>   available: 2 nodes (0-1)
>>   ^^^^^^^^^^^^^^^^^^^^^^^^
>>              Still only two nodes (and memory hot-added to node 0 below)
>>   node 0 cpus: 0
>>   node 0 size: 1024 MB
>>   node 0 free: 1021 MB
>>   node 1 cpus:
>>   node 1 size: 999 MB
>>   node 1 free: 658 MB
>>   node distances:
>>   node   0   1
>>     0:  10  40
>>     1:  40  10
>>
>> With the fix applied, numactl(8) reports 3 nodes available and the
>> memory is plugged into node 2, as expected.
>>
>> Fixes: da9f80fbad21 ("spapr: Add ibm,max-associativity-domains property")
>> Reported-by: Laurent Vivier <address@hidden>
>> Signed-off-by: Serhii Popovych <address@hidden>
>> ---
>>  hw/ppc/spapr.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>> index 7afd1a1..843ae6c 100644
>> --- a/hw/ppc/spapr.c
>> +++ b/hw/ppc/spapr.c
>> @@ -1033,7 +1033,7 @@ static void spapr_dt_rtas(sPAPRMachineState *spapr, void *fdt)
>>          cpu_to_be32(0),
>>          cpu_to_be32(0),
>>          cpu_to_be32(0),
>> -        cpu_to_be32(nb_numa_nodes ? nb_numa_nodes - 1 : 0),
>> +        cpu_to_be32(nb_numa_nodes ? nb_numa_nodes : 0),
> 
> Maybe simply cpu_to_be32(nb_numa_nodes) ?

Or "cpu_to_be32(nb_numa_nodes ? nb_numa_nodes : 1)" ?

In spapr_populate_drconf_memory() we have this logic.
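
For illustration, here is a tiny standalone program comparing the three
variants (just a sketch, not the QEMU code; the helper names are
invented):

/* Illustration only: what the last cell of
 * "ibm,max-associativity-domains" (the per-node domain count) comes
 * out to under each variant discussed above. */
#include <stdio.h>

static unsigned last_cell_before(unsigned nb_numa_nodes)
{
    return nb_numa_nodes ? nb_numa_nodes - 1 : 0;   /* off by one */
}

static unsigned last_cell_patch(unsigned nb_numa_nodes)
{
    return nb_numa_nodes ? nb_numa_nodes : 0;
}

static unsigned last_cell_drconf_style(unsigned nb_numa_nodes)
{
    return nb_numa_nodes ? nb_numa_nodes : 1;       /* at least one domain */
}

int main(void)
{
    for (unsigned n = 0; n <= 3; n++) {
        printf("nb_numa_nodes=%u  before=%u  patch=%u  drconf=%u\n",
               n, last_cell_before(n), last_cell_patch(n),
               last_cell_drconf_style(n));
    }
    return 0;
}

With the reporter's 3-node configuration the old value is 2, which
matches numactl -H seeing only two nodes.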

Thanks,
Laurent


