Re: [Qemu-devel] [RFC 0/6] enable numa configuration before machine_init


From: Eduardo Habkost
Subject: Re: [Qemu-devel] [RFC 0/6] enable numa configuration before machine_init() from HMP/QMP
Date: Wed, 18 Oct 2017 10:59:11 -0200
User-agent: Mutt/1.9.0 (2017-09-02)

On Tue, Oct 17, 2017 at 06:18:59PM +0200, Igor Mammedov wrote:
> On Tue, 17 Oct 2017 17:09:26 +0100
> "Daniel P. Berrange" <address@hidden> wrote:
> 
> > On Tue, Oct 17, 2017 at 06:06:35PM +0200, Igor Mammedov wrote:
> > > On Tue, 17 Oct 2017 16:07:59 +0100
> > > "Daniel P. Berrange" <address@hidden> wrote:
> > >   
> > > > On Tue, Oct 17, 2017 at 09:27:02AM +0200, Igor Mammedov wrote:  
> > > > > On Mon, 16 Oct 2017 17:36:36 +0100
> > > > > "Daniel P. Berrange" <address@hidden> wrote:
> > > > >     
> > > > > > On Mon, Oct 16, 2017 at 06:22:50PM +0200, Igor Mammedov wrote:    
> > > > > > > This series allows NUMA mapping to be configured at runtime via
> > > > > > > the QMP/HMP interface. To make that possible it introduces a new
> > > > > > > '-paused' CLI option, which pauses QEMU before machine_init() is
> > > > > > > run, and adds new set-numa-node HMP/QMP commands which, in
> > > > > > > conjunction with info hotpluggable-cpus/query-hotpluggable-cpus,
> > > > > > > allow the NUMA mapping for CPUs to be configured.
> > > > > > 
> > > > > > What's the problem we're seeking to solve here compared to what
> > > > > > we currently do for NUMA configuration?
> > > > > From RHBZ1382425
> > > > > "
> > > > > The current -numa CLI interface is quite limited in how it maps
> > > > > CPUs to NUMA nodes, as it requires cpu_index values which are
> > > > > non-obvious and depend on machine/arch. As a result, libvirt has to
> > > > > assume/re-implement the cpu_index allocation logic to provide valid
> > > > > values for the -numa cpus=... QEMU CLI option.
> > > > 
> > > > In broad terms, this problem applies to every device / object libvirt
> > > > asks QEMU to create. For everything else libvirt is able to assign an
> > > > "id" string, which it can then use to identify the thing later. The
> > > > CPU stuff is different because libvirt isn't able to provide 'id'
> > > > strings for each CPU - QEMU generates a pseudo-id internally which
> > > > libvirt has to infer. The latter is the same problem we had with
> > > > devices before '-device' was introduced allowing 'id' naming.
> > > > 
> > > > IMHO we should take the same approach with CPUs and start modelling 
> > > > the individual CPUs as something we can explicitly create with -object
> > > > or -device. That way libvirt can assign names and does not have to 
> > > > care about CPU index values, and it all works just the same way as
> > > > any other device / object we create.
> > > > 
> > > > ie instead of:
> > > > 
> > > >   -smp 8,sockets=4,cores=2,threads=1
> > > >   -numa node,nodeid=0,cpus=0-3
> > > >   -numa node,nodeid=1,cpus=4-7
> > > > 
> > > > we could do:
> > > > 
> > > >   -object numa-node,id=numa0
> > > >   -object numa-node,id=numa1
> > > >   -object cpu,id=cpu0,node=numa0,socket=0,core=0,thread=0
> > > >   -object cpu,id=cpu1,node=numa0,socket=0,core=1,thread=0
> > > >   -object cpu,id=cpu2,node=numa0,socket=1,core=0,thread=0
> > > >   -object cpu,id=cpu3,node=numa0,socket=1,core=1,thread=0
> > > >   -object cpu,id=cpu4,node=numa1,socket=2,core=0,thread=0
> > > >   -object cpu,id=cpu5,node=numa1,socket=2,core=1,thread=0
> > > >   -object cpu,id=cpu6,node=numa1,socket=3,core=0,thread=0
> > > >   -object cpu,id=cpu7,node=numa1,socket=3,core=1,thread=0  
> > > the follow-up question would be where "socket=3,core=1,thread=0"
> > > comes from; currently these options are a function of
> > > (-M foo -smp ...) and can be queried via query-hotpluggable-cpus at
> > > runtime after QEMU parses the -M and -smp options.
> > 

Also, note that in the case of NUMA, having identifiers for CPU
objects themselves won't be enough. NUMA settings need
identifiers for CPU slots (even if they are still empty), and
those slots are provided by the machine, not created by the user.
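To illustrate the point, a minimal sketch of what a management application would do with those machine-provided slots: the slot dicts below imitate the shape of query-hotpluggable-cpus output, while the shape of the resulting set-numa-node arguments is only an assumption based on the RFC's description (the exact argument schema is not settled in this thread).

```python
# Sketch: bind machine-provided CPU slots to NUMA nodes by topology props.
# Slot dicts imitate query-hotpluggable-cpus output; the set-numa-node
# argument layout is an assumption, not a fixed QMP schema.

def slots_for_smp(sockets, cores, threads):
    """Fake the empty-slot list a machine might report for
    -smp sockets=S,cores=C,threads=T (no 'qom-path': nothing plugged yet)."""
    return [
        {"type": "qemu64-x86_64-cpu", "vcpus-count": 1,
         "props": {"socket-id": s, "core-id": c, "thread-id": t}}
        for s in range(sockets) for c in range(cores) for t in range(threads)
    ]

def numa_bindings(slots, num_nodes):
    """Spread slots over nodes socket by socket, like the two-node
    -numa cpus=0-3 / cpus=4-7 example earlier in the thread."""
    sockets = sorted({slot["props"]["socket-id"] for slot in slots})
    node_of_socket = {s: i * num_nodes // len(sockets)
                      for i, s in enumerate(sockets)}
    return [
        {"node-id": node_of_socket[slot["props"]["socket-id"]],
         **slot["props"]}
        for slot in slots
    ]

slots = slots_for_smp(sockets=4, cores=2, threads=1)
cmds = numa_bindings(slots, num_nodes=2)
# Sockets 0-1 land on node 0, sockets 2-3 on node 1, matching the
# -smp 8 / two-node example quoted above.
```

The key difference from the cpu_index approach is that nothing here invents an index: every slot is addressed only by the topology properties the machine itself reported.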


> > The sockets/cores/threads topology of CPUs is something that comes from
> > the libvirt guest XML config
> in this case, what libvirt would need to implement is knowledge of the
> following details:
>    1: which machine/machine version supports which set of attributes
>    2: valid values for these properties depending on machine/machine 
> version/cpu type

The big assumption in this series is that libvirt doesn't know in
advance what the possible slots for CPUs will look like on each
machine type, and needs to query them using
query-hotpluggable-cpus.

But if this assumption were really true, it would be impossible
for the user to even decide what the NUMA topology will look
like, wouldn't it?

Igor, are you able to give one example of what the user input
(libvirt XML) for configuring NUMA CPU binding could look like if
the user didn't yet know what the available sockets/cores/threads
are?

-- 
Eduardo


