From: Serhii Popovych
Subject: Re: [Qemu-ppc] [PATCH for 3.1] spapr: Fix ibm,max-associativity-domains property number of nodes
Date: Mon, 19 Nov 2018 18:18:04 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:51.0) Gecko/20100101 Firefox/51.0
Laurent Vivier wrote:
> On 19/11/2018 14:27, Greg Kurz wrote:
>> On Mon, 19 Nov 2018 08:09:38 -0500
>> Serhii Popovych <address@hidden> wrote:
>>
>>> Laurent Vivier reported an off-by-one: the maximum number of NUMA nodes
>>> provided by qemu-kvm is one less than required by the description of the
>>> "ibm,max-associativity-domains" property in LoPAPR.
>>>
>>> It appears that I misread the LoPAPR description of this property,
>>> assuming it gives the last valid domain (NUMA node here) rather than
>>> the maximum number of domains.
>>>
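>>> An invocation along these lines reproduces the topology below (a sketch
>>> only; the exact command line is assumed here, not taken from the report,
>>> and uses the legacy "-numa node,mem=" syntax of that era):
>>>
>>>   qemu-system-ppc64 -machine pseries -smp 1 \
>>>       -m 1G,slots=4,maxmem=4G \
>>>       -numa node,nodeid=0,cpus=0,mem=0 \
>>>       -numa node,nodeid=1,mem=1G \
>>>       -numa node,nodeid=2,mem=0
>>>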
>>> ### Before hot-add
>>>
>>> (qemu) info numa
>>> 3 nodes
>>> node 0 cpus: 0
>>> node 0 size: 0 MB
>>> node 0 plugged: 0 MB
>>> node 1 cpus:
>>> node 1 size: 1024 MB
>>> node 1 plugged: 0 MB
>>> node 2 cpus:
>>> node 2 size: 0 MB
>>> node 2 plugged: 0 MB
>>>
>>> $ numactl -H
>>> available: 2 nodes (0-1)
>>> node 0 cpus: 0
>>> node 0 size: 0 MB
>>> node 0 free: 0 MB
>>> node 1 cpus:
>>> node 1 size: 999 MB
>>> node 1 free: 658 MB
>>> node distances:
>>> node 0 1
>>> 0: 10 40
>>> 1: 40 10
>>>
>>> ### Hot-add
>>>
>>> (qemu) object_add memory-backend-ram,id=mem0,size=1G
>>> (qemu) device_add pc-dimm,id=dimm1,memdev=mem0,node=2
>>> (qemu) [ 87.704898] pseries-hotplug-mem: Attempting to hot-add 4 ...
>>> <there is no "Initmem setup node 2 [mem 0xHEX-0xHEX]">
>>> [ 87.705128] lpar: Attempting to resize HPT to shift 21
>>> ... <HPT resize messages>
>>>
>>> ### After hot-add
>>>
>>> (qemu) info numa
>>> 3 nodes
>>> node 0 cpus: 0
>>> node 0 size: 0 MB
>>> node 0 plugged: 0 MB
>>> node 1 cpus:
>>> node 1 size: 1024 MB
>>> node 1 plugged: 0 MB
>>> node 2 cpus:
>>> node 2 size: 1024 MB
>>> node 2 plugged: 1024 MB
>>>
>>> $ numactl -H
>>> available: 2 nodes (0-1)
>>> ^^^^^^^^^^^^^^^^^^^^^^^^
>>> Still only two nodes (and memory hot-added to node 0 below)
>>> node 0 cpus: 0
>>> node 0 size: 1024 MB
>>> node 0 free: 1021 MB
>>> node 1 cpus:
>>> node 1 size: 999 MB
>>> node 1 free: 658 MB
>>> node distances:
>>> node 0 1
>>> 0: 10 40
>>> 1: 40 10
>>>
>>> With the fix applied, numactl(8) reports 3 nodes available and memory is
>>> plugged into node 2 as expected.
>>>
>>> Fixes: da9f80fbad21 ("spapr: Add ibm,max-associativity-domains property")
>>> Reported-by: Laurent Vivier <address@hidden>
>>> Signed-off-by: Serhii Popovych <address@hidden>
>>> ---
>>> hw/ppc/spapr.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>>> index 7afd1a1..843ae6c 100644
>>> --- a/hw/ppc/spapr.c
>>> +++ b/hw/ppc/spapr.c
>>> @@ -1033,7 +1033,7 @@ static void spapr_dt_rtas(sPAPRMachineState *spapr, void *fdt)
>>> cpu_to_be32(0),
>>> cpu_to_be32(0),
>>> cpu_to_be32(0),
>>> - cpu_to_be32(nb_numa_nodes ? nb_numa_nodes - 1 : 0),
>>> + cpu_to_be32(nb_numa_nodes ? nb_numa_nodes : 0),
>>
>> Maybe simply cpu_to_be32(nb_numa_nodes) ?
>
> I agree the "? : " is not needed.
>
> With "cpu_to_be32(nb_numa_nodes)":
>
Agree, the ?: was relevant only to catch the -1 case when running a guest
without a NUMA config. Will send v2. Thanks for the quick review.
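
For reference, the v2 hunk against master should then reduce to roughly the
following (a sketch, not the actual posted patch):

-    cpu_to_be32(nb_numa_nodes ? nb_numa_nodes - 1 : 0),
+    cpu_to_be32(nb_numa_nodes),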
> Reviewed-by: Laurent Vivier <address@hidden>
>
> Thanks,
> Laurent
>
--
Thanks,
Serhii