From: Daniel P. Berrangé
Subject: Re: [PATCH v2 0/4] NUMA: Apply socket-NUMA-node boundary for aarch64 and RiscV machines
Date: Thu, 23 Feb 2023 12:57:22 +0000
User-agent: Mutt/2.2.9 (2022-11-12)

On Thu, Feb 23, 2023 at 04:13:57PM +0800, Gavin Shan wrote:
> For the arm64 and RiscV architectures, the driver (drivers/base/arch_topology.c)
> is used to populate the CPU topology in the Linux guest. It's required that
> the CPUs in one socket don't span multiple NUMA nodes. Otherwise, the Linux
> scheduling domain can't be sorted out, as the following warning message
> indicates. To avoid this confusion, this series attempts to reject such
> insane configurations.
> 
>    -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
>    -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
>    -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
>    -numa node,nodeid=2,cpus=4-5,memdev=ram2                \

This is somewhat odd as a config, because core 2 is in socket 0
and core 3 is in socket 1, so it wouldn't make much conceptual
sense to have them in the same NUMA node, while other cores within
the same socket are in different NUMA nodes. Basically the split
of NUMA nodes is not aligned with any level in the topology.
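
(For illustration, assuming the usual socket-major numbering of the
linear CPU indexes, the mapping works out as in this standalone
sketch; this is not QEMU code, just the arithmetic it implies:)

    /* cpu-index -> socket mapping for
     * -smp sockets=2,clusters=1,cores=3,threads=1, assuming
     * socket-major numbering of linear CPU indexes. */
    #include <stdio.h>

    int main(void)
    {
        const int sockets = 2, clusters = 1, cores = 3, threads = 1;
        const int cpus_per_socket = clusters * cores * threads;

        for (int cpu = 0; cpu < sockets * cpus_per_socket; cpu++) {
            printf("cpu %d -> socket %d\n", cpu, cpu / cpus_per_socket);
        }
        /* cpus 0-2 land in socket 0 and cpus 3-5 in socket 1, so the
         * node with cpus=2-3 straddles the socket boundary. */
        return 0;
    }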

This series, however, also rejects configurations that I would
normally consider to be reasonable. I've not tested the Linux kernel
behaviour, but as a user I would expect to be able to
ask for:

    -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
    -numa node,nodeid=0,cpus=0,memdev=ram0                \
    -numa node,nodeid=1,cpus=1,memdev=ram1                \
    -numa node,nodeid=2,cpus=2,memdev=ram2                \
    -numa node,nodeid=3,cpus=3,memdev=ram3                \
    -numa node,nodeid=4,cpus=4,memdev=ram4                \
    -numa node,nodeid=5,cpus=5,memdev=ram5                \

i.e. every core gets its own NUMA node.

Or to ask for every cluster as a NUMA node:

    -smp 6,maxcpus=6,sockets=2,clusters=3,cores=1,threads=1 \
    -numa node,nodeid=0,cpus=0,memdev=ram0                \
    -numa node,nodeid=1,cpus=1,memdev=ram1                \
    -numa node,nodeid=2,cpus=2,memdev=ram2                \
    -numa node,nodeid=3,cpus=3,memdev=ram3                \
    -numa node,nodeid=4,cpus=4,memdev=ram4                \
    -numa node,nodeid=5,cpus=5,memdev=ram5                \

In both cases the NUMA split is aligned with a given level
in the topology, which was not the case with your example.
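
Purely to illustrate what I mean by "aligned": a node is aligned with
a level if all of its CPUs fall inside a single unit of that level
(one socket, or one cluster). A hypothetical standalone check, not
something taken from this series:

    #include <stdbool.h>
    #include <stdio.h>

    /* true if all of the node's CPUs fall inside one unit of
     * 'cpus_per_unit' CPUs (e.g. one socket or one cluster) */
    static bool node_within_one_unit(const int *cpus, int ncpus,
                                     int cpus_per_unit)
    {
        for (int i = 1; i < ncpus; i++) {
            if (cpus[i] / cpus_per_unit != cpus[0] / cpus_per_unit) {
                return false;
            }
        }
        return true;
    }

    int main(void)
    {
        const int cpus_per_socket = 3;  /* sockets=2,clusters=1,cores=3 */
        const int node1[] = { 2, 3 };   /* the node from your example   */

        printf("node {2,3} within one socket: %s\n",
               node_within_one_unit(node1, 2, cpus_per_socket)
               ? "yes" : "no");
        return 0;
    }

With the per-core or per-cluster splits above, every node trivially
passes such a check at the matching level.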

Rejecting these feels overly strict to me, and may risk breaking
existing valid deployments, unless we can demonstrate that those
scenarios were unambiguously already broken?

If there is something in the hardware specs that requires this,
then it is more valid to do than if it is merely a specific guest
kernel limitation that might be fixed any day.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



