qemu-devel

From: Gavin Shan
Subject: Re: [PATCH v2 0/4] NUMA: Apply socket-NUMA-node boundary for aarch64 and RiscV machines
Date: Fri, 24 Feb 2023 16:47:15 +1100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.2.0

On 2/23/23 11:57 PM, Daniel P. Berrangé wrote:
On Thu, Feb 23, 2023 at 04:13:57PM +0800, Gavin Shan wrote:
For the arm64 and RiscV architectures, the driver (/base/arch_topology.c) is
used to populate the CPU topology in the Linux guest. It's required that
the CPUs in one socket don't span multiple NUMA nodes. Otherwise, the Linux
scheduling domains can't be sorted out, as the following warning message
indicates. To avoid this unexpected confusion, this series attempts to
reject such insane configurations.

    -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
    -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
    -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
    -numa node,nodeid=2,cpus=4-5,memdev=ram2                \

This is somewhat odd as a config, because core 2 is in socket 0
and core 3 is in socket 1, so it wouldn't make much conceptual
sense to have them in the same NUMA node, while other cores within
the same socket are in different NUMA nodes. Basically the split
of NUMA nodes is not aligned with any level in the topology.

This series, however, also rejects configurations that I would
normally consider to be reasonable. I've not tested the Linux kernel
behaviour, but as a user I would expect to be able to ask for:

     -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
     -numa node,nodeid=0,cpus=0,memdev=ram0                \
     -numa node,nodeid=1,cpus=1,memdev=ram1                \
     -numa node,nodeid=2,cpus=2,memdev=ram2                \
     -numa node,nodeid=3,cpus=3,memdev=ram3                \
     -numa node,nodeid=4,cpus=4,memdev=ram4                \
     -numa node,nodeid=5,cpus=5,memdev=ram5                \

i.e., every core gets its own NUMA node


It doesn't work for the Linux guest either. As the following warning message
indicates, the multicore (MC) domain isn't a subset of the DIE (cluster or
socket) domain. For example, the MC domain is 0-2 while the DIE domain is 0
for CPU-0.

[    0.023486] CPU-0: 36,56,0,-1 thread=0  core=0-2  cluster=0-2 llc=0    // parsed from ACPI PPTT
[    0.023490] CPU-1: 36,56,1,-1 thread=1  core=0-2  cluster=0-2 llc=1
[    0.023492] CPU-2: 36,56,2,-1 thread=2  core=0-2  cluster=0-2 llc=2
[    0.023494] CPU-3: 136,156,3,-1 thread=3  core=3-5  cluster=3-5 llc=3
[    0.023495] CPU-4: 136,156,4,-1 thread=4  core=3-5  cluster=3-5 llc=4
[    0.023497] CPU-5: 136,156,5,-1 thread=5  core=3-5  cluster=3-5 llc=5
[    0.023499] CPU-0: SMT=0  CLUSTER=0  MULTICORE=0-2  DIE=0  CPU-OF-NODE=0    // Seen by scheduling domain
[    0.023501] CPU-1: SMT=1  CLUSTER=1  MULTICORE=0-2  DIE=1  CPU-OF-NODE=1
[    0.023503] CPU-2: SMT=2  CLUSTER=2  MULTICORE=0-2  DIE=2  CPU-OF-NODE=2
[    0.023504] CPU-3: SMT=3  CLUSTER=3  MULTICORE=3-5  DIE=3  CPU-OF-NODE=3
[    0.023506] CPU-4: SMT=4  CLUSTER=4  MULTICORE=3-5  DIE=4  CPU-OF-NODE=4
[    0.023508] CPU-5: SMT=5  CLUSTER=5  MULTICORE=3-5  DIE=5  CPU-OF-NODE=5
        :
[    0.023555] BUG: arch topology borken
[    0.023556]      the MC domain not a subset of the DIE domain

NOTE that DIE and CPU-OF-NODE are the same since they're both returned by
'cpumask_of_node(cpu_to_node(cpu))'.
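
To make the failure concrete: the scheduler's complaint boils down to a cpumask
subset test between the MC (multicore) level and its parent (DIE/NODE) level;
if I remember correctly, the real check lives in kernel/sched/topology.c. Below
is a purely illustrative, standalone sketch (not kernel code) using the spans
from the log above, where the MC mask is 0-2 or 3-5 but each node mask only
holds a single CPU, so the test fails for every CPU:

/*
 * Illustrative only: "child is a subset of parent" check between the
 * MC span (parsed from ACPI PPTT) and the per-node span for each CPU.
 */
#include <stdio.h>

int main(void)
{
    /* bit N set => CPU N is in the mask */
    unsigned int mc_span[6]  = { 0x07, 0x07, 0x07, 0x38, 0x38, 0x38 }; /* core=0-2 / 3-5 */
    unsigned int die_span[6] = { 0x01, 0x02, 0x04, 0x08, 0x10, 0x20 }; /* one CPU per node */

    for (int cpu = 0; cpu < 6; cpu++) {
        if (mc_span[cpu] & ~die_span[cpu]) {    /* MC not contained in DIE */
            printf("CPU-%d: MC domain not a subset of the DIE domain\n", cpu);
        }
    }

    return 0;
}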


Or to ask for every cluster as a NUMA node:

     -smp 6,maxcpus=6,sockets=2,clusters=3,cores=1,threads=1 \
     -numa node,nodeid=0,cpus=0,memdev=ram0                \
     -numa node,nodeid=1,cpus=1,memdev=ram1                \
     -numa node,nodeid=2,cpus=2,memdev=ram2                \
     -numa node,nodeid=3,cpus=3,memdev=ram3                \
     -numa node,nodeid=4,cpus=4,memdev=ram4                \
     -numa node,nodeid=5,cpus=5,memdev=ram5                \


This case works fine for the Linux guest.

[    0.024505] CPU-0: 36,56,0,-1 thread=0  core=0-2  cluster=0 llc=0    // parsed from ACPI PPTT
[    0.024509] CPU-1: 36,96,1,-1 thread=1  core=0-2  cluster=1 llc=1
[    0.024511] CPU-2: 36,136,2,-1 thread=2  core=0-2  cluster=2 llc=2
[    0.024512] CPU-3: 176,196,3,-1 thread=3  core=3-5  cluster=3 llc=3
[    0.024514] CPU-4: 176,236,4,-1 thread=4  core=3-5  cluster=4 llc=4
[    0.024515] CPU-5: 176,276,5,-1 thread=5  core=3-5  cluster=5 llc=5
[    0.024518] CPU-0: SMT=0  CLUSTER=0  MULTICORE=0  DIE=0  CPU-OF-NODE=0    // Seen by scheduling domain
[    0.024519] CPU-1: SMT=1  CLUSTER=1  MULTICORE=1  DIE=1  CPU-OF-NODE=1
[    0.024521] CPU-2: SMT=2  CLUSTER=2  MULTICORE=2  DIE=2  CPU-OF-NODE=2
[    0.024522] CPU-3: SMT=3  CLUSTER=3  MULTICORE=3  DIE=3  CPU-OF-NODE=3
[    0.024524] CPU-4: SMT=4  CLUSTER=4  MULTICORE=4  DIE=4  CPU-OF-NODE=4
[    0.024525] CPU-5: SMT=5  CLUSTER=5  MULTICORE=5  DIE=5  CPU-OF-NODE=5


In both cases the NUMA split is aligned with a given level
in the topology, which was not the case with your example.

Rejecting these feels overly strict to me, and may risk breaking
existing valid deployments, unless we can demonstrate those
scenarios were unambiguously already broken?

If there were something in the hardware specs that requires
this, it would be more valid to do than if it is merely a
specific guest kernel limitation that might be fixed any day.


Yes, I agree that the socket-to-NUMA-node boundary is strict. However,
it doesn't seem sensible to split the CPUs in one cluster across different
NUMA nodes, or to split the CPUs in one core across different NUMA nodes,
in a bare-metal environment. I think we probably only need to prevent these
two cases, meaning two clusters in one socket are still allowed to be
associated with different NUMA nodes.
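
To be concrete about the cluster-granularity boundary I'm suggesting, here is
a minimal, self-contained sketch. The type and function names (CpuTopoProps,
cluster_numa_boundary_ok) are made up for illustration and this is not what
the series currently implements; a real check would operate on the machine's
CPU instance properties in QEMU:

/*
 * Illustrative only: reject a configuration where CPUs sharing a
 * (socket, cluster) pair are assigned to different NUMA nodes, while
 * still allowing different clusters in one socket to sit in
 * different nodes.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int socket_id;
    int cluster_id;
    int node_id;
} CpuTopoProps;    /* hypothetical stand-in for QEMU's CPU instance properties */

static bool cluster_numa_boundary_ok(const CpuTopoProps *cpus, int n)
{
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            if (cpus[i].socket_id == cpus[j].socket_id &&
                cpus[i].cluster_id == cpus[j].cluster_id &&
                cpus[i].node_id != cpus[j].node_id) {
                return false;   /* one cluster spans multiple NUMA nodes */
            }
        }
    }
    return true;
}

int main(void)
{
    /* sockets=2,clusters=1,cores=3 with one NUMA node per core (first example above) */
    CpuTopoProps per_core_node[6] = {
        { 0, 0, 0 }, { 0, 0, 1 }, { 0, 0, 2 },
        { 1, 0, 3 }, { 1, 0, 4 }, { 1, 0, 5 },
    };
    /* sockets=2,clusters=3,cores=1 with one NUMA node per cluster (second example above) */
    CpuTopoProps per_cluster_node[6] = {
        { 0, 0, 0 }, { 0, 1, 1 }, { 0, 2, 2 },
        { 1, 0, 3 }, { 1, 1, 4 }, { 1, 2, 5 },
    };

    printf("per-core nodes:    %s\n",
           cluster_numa_boundary_ok(per_core_node, 6) ? "accepted" : "rejected");
    printf("per-cluster nodes: %s\n",
           cluster_numa_boundary_ok(per_cluster_node, 6) ? "accepted" : "rejected");

    return 0;
}

With such a check, the per-core layout would still be rejected (and it breaks
the guest anyway, as shown above), while the per-cluster layout would be
accepted.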

I've failed to find accurate information about the relation among
socket/cluster/core in the specs. As I understand it, the CPUs (threads) in
one core share the L2 cache and the cores in one cluster share the L3 cache,
while each thread has its own L1 cache. The L3 cache usually corresponds to
a NUMA node. I may be totally wrong here.

Thanks,
Gavin






