Re: [Qemu-devel] [RFC PATCH v3 12/24] spapr: CPU hotplug support
From: David Gibson
Subject: Re: [Qemu-devel] [RFC PATCH v3 12/24] spapr: CPU hotplug support
Date: Thu, 7 May 2015 11:03:47 +1000
User-agent: Mutt/1.5.23 (2014-03-12)
On Wed, May 06, 2015 at 11:44:20AM +0530, Bharata B Rao wrote:
> On Tue, May 05, 2015 at 04:59:51PM +1000, David Gibson wrote:
> > On Fri, Apr 24, 2015 at 12:17:34PM +0530, Bharata B Rao wrote:
> > > Support CPU hotplug via device-add command. Set up device tree
> > > entries for the hotplugged CPU core and use the existing EPOW event
> > > infrastructure to send CPU hotplug notification to the guest.
> > >
> > > Also support cold plugged CPUs that are specified by -device option
> > > on cmdline.
> > >
> > > Signed-off-by: Bharata B Rao <address@hidden>
> > > ---
> > >  hw/ppc/spapr.c        | 129 ++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  hw/ppc/spapr_events.c |   8 ++--
> > >  hw/ppc/spapr_rtas.c   |  11 +++++
> > >  3 files changed, 145 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > > index b526b7d..9b0701c 100644
> > > --- a/hw/ppc/spapr.c
> > > +++ b/hw/ppc/spapr.c
> > > @@ -33,6 +33,7 @@
> > > #include "sysemu/block-backend.h"
> > > #include "sysemu/cpus.h"
> > > #include "sysemu/kvm.h"
> > > +#include "sysemu/device_tree.h"
> > > #include "kvm_ppc.h"
> > > #include "mmu-hash64.h"
> > > #include "qom/cpu.h"
> > > @@ -662,6 +663,17 @@ static void spapr_populate_cpu_dt(CPUState *cs, void *fdt, int offset)
> > > unsigned sockets = opts ? qemu_opt_get_number(opts, "sockets", 0) : 0;
> > > uint32_t cpus_per_socket = sockets ? (smp_cpus / sockets) : 1;
> > > uint32_t pft_size_prop[] = {0, cpu_to_be32(spapr->htab_shift)};
> > > + sPAPRDRConnector *drc;
> > > + sPAPRDRConnectorClass *drck;
> > > + int drc_index;
> > > +
> > > + if (spapr->dr_cpu_enabled) {
> > > + drc = spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, index);
> > > + g_assert(drc);
> > > + drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);
> > > + drc_index = drck->get_index(drc);
> > > + _FDT((fdt_setprop_cell(fdt, offset, "ibm,my-drc-index", drc_index)));
> > > + }
> > >
> > > _FDT((fdt_setprop_cell(fdt, offset, "reg", index)));
> > > _FDT((fdt_setprop_string(fdt, offset, "device_type", "cpu")));
> > > @@ -1850,6 +1862,114 @@ static void spapr_nmi(NMIState *n, int cpu_index, Error **errp)
> > > }
> > > }
> > >
> > > +static void *spapr_populate_hotplug_cpu_dt(DeviceState *dev, CPUState *cs,
> > > + int *fdt_offset)
> > > +{
> > > + PowerPCCPU *cpu = POWERPC_CPU(cs);
> > > + DeviceClass *dc = DEVICE_GET_CLASS(cs);
> > > + int id = ppc_get_vcpu_dt_id(cpu);
> > > + void *fdt;
> > > + int offset, fdt_size;
> > > + char *nodename;
> > > +
> > > + fdt = create_device_tree(&fdt_size);
> > > + nodename = g_strdup_printf("address@hidden", dc->fw_name, id);
> > > + offset = fdt_add_subnode(fdt, 0, nodename);
> > > +
> > > + spapr_populate_cpu_dt(cs, fdt, offset);
> > > + g_free(nodename);
> > > +
> > > + *fdt_offset = offset;
> > > + return fdt;
> > > +}
> > > +
> > > +static void spapr_cpu_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
> > > + Error **errp)
> > > +{
> > > + CPUState *cs = CPU(dev);
> > > + PowerPCCPU *cpu = POWERPC_CPU(cs);
> > > + int id = ppc_get_vcpu_dt_id(cpu);
> > > + sPAPRDRConnector *drc =
> > > + spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, id);
> > > + sPAPRDRConnectorClass *drck;
> > > + int smt = kvmppc_smt_threads();
> > > + Error *local_err = NULL;
> > > + void *fdt = NULL;
> > > + int i, fdt_offset = 0;
> > > +
> > > + /* Set NUMA node for the added CPUs */
> > > + for (i = 0; i < nb_numa_nodes; i++) {
> > > + if (test_bit(cs->cpu_index, numa_info[i].node_cpu)) {
> > > + cs->numa_node = i;
> > > + break;
> > > + }
> > > + }
> > > +
> > > + /*
> > > + * Secondary SMT threads return from here; only the main thread of a
> > > + * core will continue and signal the hotplug event to the guest.
> > > + */
> > > + if ((id % smt) != 0) {
> > > + return;
> > > + }
> >
> > Couldn't you avoid this by attaching this call to the core device,
> > rather than the individual vcpu thread objects?
>
> Adding a socket device will result in a realize call for that device.
> The socket realizefn will call the core realizefn, and the core realizefn
> will call the thread (or CPU) realizefn. device_set_realized() will call
> the ->plug handler for all these devices (socket, cores and threads), and
> that's how we end up here even for threads.
>
> This will be the same when I get rid of the socket abstraction and
> hotplug cores, for the same reason as above.
>
> And calling the ->plug handler in the context of threads is required
> to initialize board-specific CPU bits for the CPU thread that is being
> realized.
Right, but can't you do the thread-specific initialization from the
thread hotplug path, and the core-specific initialization (basically
everything below this if) from the core hotplug path?
>
> >
> >
> > > + if (!spapr->dr_cpu_enabled) {
> > > + /*
> > > + * This is a cold plugged CPU but the machine doesn't support
> > > + * DR. So skip the hotplug path, ensuring that the CPU is brought
> > > + * up online without an associated DR connector.
> > > + */
> > > + return;
> > > + }
> > > +
> > > + g_assert(drc);
> > > +
> > > + /*
> > > + * Set up CPU DT entries only for hotplugged CPUs. For boot-time or
> > > + * coldplugged CPUs, DT entries are set up in spapr_finalize_fdt().
> > > + */
> > > + if (dev->hotplugged) {
> > > + fdt = spapr_populate_hotplug_cpu_dt(dev, cs, &fdt_offset);
> > > + }
> > > +
> > > + drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);
> > > + drck->attach(drc, dev, fdt, fdt_offset, !dev->hotplugged, &local_err);
> > > + if (local_err) {
> > > + g_free(fdt);
> > > + error_propagate(errp, local_err);
> > > + return;
> > > + }
> > > +
> > > + /*
> > > + * We send the hotplug notification interrupt to the guest only for
> > > + * hotplugged CPUs.
> > > + */
> > > + if (dev->hotplugged) {
> > > + spapr_hotplug_req_add_event(drc);
> > > + } else {
> > > + /*
> > > + * HACK to support removal of hotplugged CPU after VM migration:
> > > + *
> > > + * Since we want to be able to hot-remove those coldplugged CPUs
> > > + * started at boot time using -device option at the target VM, we set
> > > + * the right allocation_state and isolation_state for them, which for
> > > + * the hotplugged CPUs would be set via RTAS calls done from the
> > > + * guest during hotplug.
> > > + *
> > > + * This allows the coldplugged CPUs started using -device option to
> > > + * have the right isolation and allocation states as expected by the
> > > + * CPU hot removal code.
> > > + *
> > > + * This hack will be removed once we have DRC states migrated as part
> > > + * of VM migration.
> > > + */
> > > + drck->set_allocation_state(drc, SPAPR_DR_ALLOCATION_STATE_USABLE);
> > > + drck->set_isolation_state(drc, SPAPR_DR_ISOLATION_STATE_UNISOLATED);
> > > + }
> > > +
> > > + return;
> > > +}
> > > +
> > > static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
> > > DeviceState *dev, Error **errp)
> > > {
> > > @@ -1858,6 +1978,15 @@ static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
> > > PowerPCCPU *cpu = POWERPC_CPU(cs);
> > >
> > > spapr_cpu_init(cpu);
> > > + spapr_cpu_reset(cpu);
> >
> > I'm a little surprised these get called here, rather than in the
> > creation / realize path of the core qdev.
>
> These two routines (spapr_cpu_init and spapr_cpu_reset) are sPAPR or
> board specific and I can't even have them as part of ppc_cpu_realizefn.
Oh, yes of course. Sorry.
> We had discussed this earlier at
> http://lists.gnu.org/archive/html/qemu-devel/2015-02/msg04399.html
>
> Regards,
> Bharata.
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson