Re: [Qemu-ppc] [PATCH 0/7] spapr: rework memory nodes
From: Nishanth Aravamudan
Subject: Re: [Qemu-ppc] [PATCH 0/7] spapr: rework memory nodes
Date: Mon, 16 Jun 2014 11:26:58 -0700
User-agent: Mutt/1.5.21 (2010-09-15)
On 16.06.2014 [18:16:29 +1000], Alexey Kardashevskiy wrote:
> On 06/16/2014 05:53 PM, Alexey Kardashevskiy wrote:
> > c4177479 "spapr: make sure RMA is in first mode of first memory node"
> > introduced a regression which prevents running guests with a memoryless
> > NUMA node#0, which may happen on real POWER8 boxes and which would make
> > sense to debug in QEMU.
> >
> > This patchset aims to fix that and also fix various code problems in
> > memory node generation.
> >
> > These 2 patches could be merged (the resulting patch looks rather ugly):
> > spapr: Use DT memory node rendering helper for other nodes
> > spapr: Move DT memory node rendering to a helper
> >
> > Please comment. Thanks!
> >
>
> Sure, I forgot to add an example of what I am trying to run without
> errors and warnings:
<snip>
> -numa node,nodeid=0,cpus=0-7,memory=0 \
> -numa node,nodeid=2,cpus=0-3,mem=1040 \
> -numa node,nodeid=4,cpus=4-7,mem=1040
Semantically, what does this mean? CPUs 0-3 are on both node 0 and node
2? I didn't think the NUMA spec allowed that. Or does qemu's
command line take the "last" specified assignment of a CPU to a nodeid?
Perhaps unrelated to your changes, but I think it would be most sensible
here to error out if a CPU is assigned to multiple NUMA nodes.
<snip>
> address@hidden ~]# numactl --hardware
>
> available: 3 nodes (0,2,4)
> node 0 cpus:
> node 0 size: 0 MB
> node 0 free: 0 MB
> node 2 cpus: 0 1 2 3
> node 2 size: 1021 MB
> node 2 free: 610 MB
> node 4 cpus: 4 5 6 7
> node 4 size: 1038 MB
> node 4 free: 881 MB
> node distances:
> node 0 2 4
> 0: 10 40 40
> 2: 40 10 40
> 4: 40 40 10
>
>
> Seems correct except that weird node#0, which comes from I do not know where.
Well, Linux has a statically online Node 0, which if no CPUs or memory
are assigned to it, will show up as above as a cpuless and memoryless
node. That's not a bug in qemu, and is something I'm looking into
upstream in the kernel.
> And the patchset is made against the agraf/ppc-next tree.
[Qemu-ppc] [PATCH 5/7] spapr: Add a helper for node0_size calculation, Alexey Kardashevskiy, 2014/06/16
[Qemu-ppc] [PATCH 2/7] spapr: Use DT memory node rendering helper for other nodes, Alexey Kardashevskiy, 2014/06/16
[Qemu-ppc] [PATCH 3/7] spapr: Refactor spapr_populate_memory(), Alexey Kardashevskiy, 2014/06/16
[Qemu-ppc] [PATCH 1/7] spapr: Move DT memory node rendering to a helper, Alexey Kardashevskiy, 2014/06/16
[Qemu-ppc] [PATCH 6/7] spapr: Fix ibm, associativity for memory nodes, Alexey Kardashevskiy, 2014/06/16
Re: [Qemu-ppc] [PATCH 0/7] spapr: rework memory nodes, Alexey Kardashevskiy, 2014/06/16
Re: [Qemu-ppc] [Qemu-devel] [PATCH 0/7] spapr: rework memory nodes, Alexey Kardashevskiy, 2014/06/17
Re: [Qemu-ppc] [Qemu-devel] [PATCH 0/7] spapr: rework memory nodes, Eduardo Habkost, 2014/06/17
Re: [Qemu-ppc] [Qemu-devel] [PATCH 0/7] spapr: rework memory nodes, Nishanth Aravamudan, 2014/06/17
Re: [Qemu-ppc] [Qemu-devel] [PATCH 0/7] spapr: rework memory nodes, Eduardo Habkost, 2014/06/17