From: Gavin Shan
Subject: Re: [PATCH v2 0/4] NUMA: Apply socket-NUMA-node boundary for aarch64 and RiscV machines
Date: Fri, 24 Feb 2023 18:09:23 +1100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.2.0

On 2/24/23 12:18 AM, Daniel Henrique Barboza wrote:
On 2/23/23 05:13, Gavin Shan wrote:
For the arm64 and RiscV architectures, the driver (/base/arch_topology.c) is
used to populate the CPU topology in the Linux guest. It's required that
the CPUs in one socket don't span multiple NUMA nodes. Otherwise, the Linux
scheduling domains can't be sorted out, as the following warning message
indicates. To avoid the unexpected confusion, this series attempts to
reject such insane configurations.

    -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
    -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
    -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
    -numa node,nodeid=2,cpus=4-5,memdev=ram2                \


And why is this a QEMU problem? This doesn't hurt ACPI.

Also, this restriction breaks ARM guests in the wild that are running
non-Linux OSes. I don't see why we should impact use cases that have nothing
to do with the Linux kernel's feelings about socket/NUMA-node exclusivity.


With the above configuration, CPU-0/1/2 are put into socket-0-cluster-0 while
CPU-3/4/5 are put into socket-1-cluster-0, meaning CPU-2 and CPU-3 belong to
different sockets and clusters. However, CPU-2/3 are both associated with NUMA
node-1. In summary, multiple CPUs in different clusters and sockets have been
associated with one NUMA node.
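
For illustration, here is a minimal sketch (not QEMU's actual code) of that
mapping, assuming CPUs are packed into cores, clusters and sockets in index
order as -smp does, with the NUMA node taken from the -numa options above:

    /* Sketch only: derive socket/cluster/node for each CPU index under
     * -smp 6,sockets=2,clusters=1,cores=3,threads=1 and the -numa
     * options above (node = cpu / 2). Not QEMU code.
     */
    #include <stdio.h>

    int main(void)
    {
        const int sockets = 2, clusters = 1, cores = 3, threads = 1;
        const int ncpus = sockets * clusters * cores * threads;

        for (int cpu = 0; cpu < ncpus; cpu++) {
            int socket_id  = cpu / (clusters * cores * threads);
            int cluster_id = (cpu / (cores * threads)) % clusters;
            int node_id    = cpu / 2;   /* from the -numa cpus= ranges */
            printf("CPU-%d: socket %d, cluster %d, NUMA node %d\n",
                   cpu, socket_id, cluster_id, node_id);
        }
        return 0;
    }

Running it shows CPU-2 landing in socket 0 and CPU-3 in socket 1, while both
sit in NUMA node 1, which is exactly the split described above.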

If I'm correct, the configuration isn't sensible in a baremetal environment,
and the same Linux kernel is supposed to work well on both baremetal and
virtualized machines. So I think QEMU needs to emulate the topology as closely
as it can to match the baremetal environment. That's the reason why I think
it's a QEMU problem even though it doesn't hurt ACPI. As I said in the reply
to Daniel P. Berrangé <berrange@redhat.com> in another thread, we may need to
guarantee that the CPUs in one cluster can't be split across multiple NUMA
nodes, which matches the baremetal environment, as far as I can understand.
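
As an illustration only (this is not the code in this series, and the types
and names below are simplified stand-ins rather than QEMU's actual
possible-CPU API), such a boundary check could walk the CPU list and reject a
NUMA node that spans sockets:

    /* Rough sketch of the proposed boundary check: reject configurations
     * where one NUMA node contains CPUs from more than one socket.
     * CpuTopo is a stand-in for QEMU's per-CPU instance properties.
     */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int socket_id;
        int node_id;
    } CpuTopo;

    static bool numa_node_spans_sockets(const CpuTopo *cpus, int ncpus)
    {
        for (int i = 0; i < ncpus; i++) {
            for (int j = i + 1; j < ncpus; j++) {
                if (cpus[i].node_id == cpus[j].node_id &&
                    cpus[i].socket_id != cpus[j].socket_id) {
                    fprintf(stderr,
                            "CPUs in NUMA node %d span multiple sockets\n",
                            cpus[i].node_id);
                    return true;
                }
            }
        }
        return false;
    }

    int main(void)
    {
        /* socket_id/node_id layout from the -smp/-numa example above */
        CpuTopo cpus[] = { {0, 0}, {0, 0}, {0, 1}, {1, 1}, {1, 2}, {1, 2} };
        return numa_node_spans_sockets(cpus, 6) ? 1 : 0;
    }

The same walk could compare cluster IDs instead of socket IDs if the boundary
ends up being per cluster rather than per socket.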

Right, the restriction to have a socket-NUMA-node or cluster-NUMA-node
boundary will definitely break configurations running in the wild.

Thanks,
Gavin

[...]



