

From: Andrew Jones
Subject: Re: [PATCH v2 0/4] NUMA: Apply socket-NUMA-node boundary for aarch64 and RiscV machines
Date: Fri, 24 Feb 2023 11:39:11 +0100

On Fri, Feb 24, 2023 at 09:16:39PM +1100, Gavin Shan wrote:
> On 2/24/23 8:26 PM, Daniel Henrique Barboza wrote:
> > On 2/24/23 04:09, Gavin Shan wrote:
> > > On 2/24/23 12:18 AM, Daniel Henrique Barboza wrote:
> > > > On 2/23/23 05:13, Gavin Shan wrote:
> > > > > For arm64 and RiscV architecture, the driver (/base/arch_topology.c) is
> > > > > used to populate the CPU topology in the Linux guest. It's required that
> > > > > the CPUs in one socket can't span multiple NUMA nodes. Otherwise, the
> > > > > Linux scheduling domain can't be sorted out, as the following warning
> > > > > message indicates. To avoid the unexpected confusion, this series
> > > > > attempts to reject such insane configurations.
> > > > > 
> > > > >     -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
> > > > >     -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
> > > > >     -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
> > > > >     -numa node,nodeid=2,cpus=4-5,memdev=ram2                \
> > > > 
> > > > 
> > > > And why is this a QEMU problem? This doesn't hurt ACPI.
> > > > 
> > > > Also, this restriction breaks ARM guests in the wild that are running
> > > > non-Linux OSes. I don't see why we should impact use cases that have
> > > > nothing to do with the Linux kernel's feelings about socket - NUMA node
> > > > exclusivity.
> > > > 
> > > 
> > > With the above configuration, CPU-0/1/2 are put into socket-0-cluster-0
> > > while CPU-3/4/5 are put into socket-1-cluster-0, meaning CPU-2/3 belong to
> > > different sockets and clusters. However, CPU-2/3 are associated with NUMA
> > > node-1. In summary, multiple CPUs in different clusters and sockets have
> > > been associated with one NUMA node.
> > > 
> > > If I'm correct, the configuration isn't sensible in a baremetal environment,
> > > and the same Linux kernel is supposed to work well on both baremetal and
> > > virtualized machines. So I think QEMU needs to emulate the topology as
> > > closely as we can to match the baremetal environment. That's the reason why
> > > I think it's a QEMU problem even though it doesn't hurt ACPI. As I said in
> > > the reply to Daniel P. Berrangé <berrange@redhat.com> in another thread, we
> > > may need to guarantee that the CPUs in one cluster can't be split across
> > > multiple NUMA nodes, which matches the baremetal environment, as far as I
> > > can tell.
> > > 
> > > Right, the restriction to have a socket-NUMA-node or cluster-NUMA-node
> > > boundary will definitely break configurations running in the wild.
> > 
> > 
> > What about a warning? If the user attempts to use an exotic NUMA
> > configuration like the one you mentioned, we can print something like:
> > 
> > "Warning: NUMA topologies where a socket belongs to multiple NUMA nodes can
> > cause OSes like Linux to misbehave"
> > 
> > This would inform the user of what might be going wrong in case Linux is
> > crashing/erroring out on them, and then the user is free to fix their
> > topology (or the kernel). At the same time, we wouldn't break existing stuff
> > that happens to be working.
> > 
> > 
> 
> Yes, I think a warning message is more appropriate, so that users can fix
> their irregular configurations and existing configurations aren't broken.
> It would be nice to get agreement from Daniel P. Berrangé and Drew before
> I change the code and post the next revision.
>

If there's a concern that this will break non-Linux OSes on arm, then, at
most, the change needs to be tied to the next machine type version, and
possibly it can never be made for arm. riscv is OK, since it currently
ignores smp parameters anyway and derives the number of sockets from the
number of NUMA nodes, using a 1:1 mapping. When smp parameters are
eventually implemented for riscv, this can be revisited.
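
For reference, the riscv socket derivation is roughly the following (a
paraphrase of hw/riscv/numa.c from memory, so the exact code in the tree
may differ):

    int riscv_socket_count(const MachineState *ms)
    {
        /* With NUMA configured, sockets == NUMA nodes (1:1); otherwise a
         * single socket.  -smp sockets=... is not consulted here. */
        return (ms->numa_state && ms->numa_state->num_nodes) ?
               ms->numa_state->num_nodes : 1;
    }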

Also, it sounds like not only has the rationale for this series been
changed to "platform choice", but also that the cluster <-> numa node
mapping should be 1:1, not the socket <-> numa node mapping. If that's
the case, then the series probably needs to be reworked for that.
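
For illustration only, here is a rough sketch of how a warning along the
lines Daniel suggested could be emitted, keyed to the cluster <-> node
boundary. The helper name and where it would be called from are
hypothetical, not part of this series:

    #include "qemu/osdep.h"
    #include "qemu/error-report.h"
    #include "hw/boards.h"

    /* Hypothetical helper: warn if CPUs sharing a socket/cluster are
     * assigned to different NUMA nodes. */
    static void warn_cluster_numa_mismatch(MachineState *ms)
    {
        const CPUArchIdList *cpus = ms->possible_cpus;

        for (int i = 0; i < cpus->len; i++) {
            for (int j = i + 1; j < cpus->len; j++) {
                const CpuInstanceProperties *a = &cpus->cpus[i].props;
                const CpuInstanceProperties *b = &cpus->cpus[j].props;

                if (a->has_node_id && b->has_node_id &&
                    a->socket_id == b->socket_id &&
                    a->cluster_id == b->cluster_id &&
                    a->node_id != b->node_id) {
                    warn_report("CPUs in socket %" PRId64 " cluster %" PRId64
                                " span multiple NUMA nodes; OSes like Linux"
                                " may misbehave",
                                a->socket_id, a->cluster_id);
                    return;
                }
            }
        }
    }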

Thanks,
drew


