Re: [Qemu-ppc] [Qemu-devel] [PATCH 0/7] spapr: rework memory nodes


From: Nishanth Aravamudan
Subject: Re: [Qemu-ppc] [Qemu-devel] [PATCH 0/7] spapr: rework memory nodes
Date: Mon, 16 Jun 2014 17:25:00 -0700
User-agent: Mutt/1.5.21 (2010-09-15)

On 16.06.2014 [17:51:50 -0300], Eduardo Habkost wrote:
> On Mon, Jun 16, 2014 at 06:16:29PM +1000, Alexey Kardashevskiy wrote:
> > On 06/16/2014 05:53 PM, Alexey Kardashevskiy wrote:
> > > c4177479 "spapr: make sure RMA is in first mode of first memory node"
> > > introduced a regression which prevents running guests with a memoryless
> > > NUMA node#0, which may happen on real POWER8 boxes and which would make
> > > sense to debug in QEMU.
> > > 
> > > The aim of this patchset is to fix that and also to fix various code
> > > problems in memory node generation.
> > > 
> > > These 2 patches could be merged (the resulting patch looks rather ugly):
> > > spapr: Use DT memory node rendering helper for other nodes
> > > spapr: Move DT memory node rendering to a helper
> > > 
> > > Please comment. Thanks!
> > > 
> > 
> > Sure enough, I forgot to add an example of what I am trying to run
> > without errors and warnings:
> > 
> > /home/aik/qemu-system-ppc64 \
> > -enable-kvm \
> > -machine pseries \
> > -nographic \
> > -vga none \
> > -drive id=id0,if=none,file=virtimg/fc20_24GB.qcow2,format=qcow2 \
> > -device scsi-disk,id=id1,drive=id0 \
> > -m 2080 \
> > -smp 8 \
> > -numa node,nodeid=0,cpus=0-7,memory=0 \
> > -numa node,nodeid=2,cpus=0-3,mem=1040 \
> > -numa node,nodeid=4,cpus=4-7,mem=1040
> 
> (Note: I will ignore the "cpus" argument for the discussion below.)
> 
> I understand now that the non-contiguous node IDs are guest-visible.
> 
> But I still would like to understand the motivations for your use case,
> to understand which solution makes more sense.
> 
> If you really want 5 nodes, you just need to write this:
>   -numa node,nodeid=0,cpus=0-7,memory=0 \
>   -numa node,nodeid=1 \
>   -numa node,nodeid=2,cpus=0-3,mem=1040 \
>   -numa node,nodeid=3 \
>   -numa node,nodeid=4,cpus=4-7,mem=1040
> 
> If you just want 3 nodes, you can just write this:
>   -numa node,nodeid=0,cpus=0-7,memory=0 \
>   -numa node,nodeid=1,cpus=0-3,mem=1040 \
>   -numa node,nodeid=4,cpus=4-7,mem=1040

No, this doesn't do what you think it would :)

nb_numa_nodes = 3

but node_mem[0] = 0
node_mem[1] = 1040
node_mem[2] = 0
node_mem[3] = 0
node_mem[4] = 1040

This is because of the generic parsing of the -numa options: nb_numa_nodes
counts the options given, but node_mem[] is indexed by the user-supplied
nodeid.

I'd need to look at my test case again (and this is reproducible on
x86), but I believe it's actually worse if you skip node 0 altogether,
e.g.:

   -numa node,nodeid=1,cpus=0-7,memory=0 \
   -numa node,nodeid=2,cpus=0-3,mem=1040 \
   -numa node,nodeid=4,cpus=4-7,mem=1040

Node 0 will have node 4's memory (because we put the rest there, iirc),
and the cpus that should be on node 4 end up on node 0 as well.

I'll try to get the exact test results later.

In any case, it's confusing that the topology you see in Linux differs
from what the command line says.

> But you seem to claim you need 3 nodes with non-contiguous IDs. In that
> case, which exactly is the guest-visible difference you expect to get
> between:
>   -numa node,nodeid=0,cpus=0-7,memory=0 \
>   -numa node,nodeid=1 \
>   -numa node,nodeid=2,cpus=0-3,mem=1040 \
>   -numa node,nodeid=3 \
>   -numa node,nodeid=4,cpus=4-7,mem=1040

I guess here you'd see 5 NUMA nodes in Linux, with 0, 1 and 3 having no
memory.

> and
>   -numa node,nodeid=0,cpus=0-7,memory=0 \
>   -numa node,nodeid=2,cpus=0-3,mem=1040 \
>   -numa node,nodeid=4,cpus=4-7,mem=1040
> ?

And here you'd see 3 NUMA nodes in Linux, with node 0 having no memory. I
would think the principle of least surprise means qemu shouldn't change
the topology from the user-requested one without any indication that
that's happening.

Thanks,
Nish



