From: David Gibson
Subject: Re: [Qemu-devel] [RFC PATCH 3/4] spapr: Add NVDIMM device support
Date: Wed, 27 Feb 2019 15:27:30 +1100
User-agent: Mutt/1.11.3 (2019-02-01)

On Mon, Feb 18, 2019 at 09:45:13PM +0530, Shivaprasad G Bhat wrote:
> 
> 
> On 02/18/2019 04:32 AM, David Gibson wrote:
> > On Fri, Feb 15, 2019 at 04:41:09PM +0530, Shivaprasad G Bhat wrote:
> > > Thanks for the comments David. Please find my replies inline..
[snip]
> > > > > +
> > > > > +    qemu_uuid_unparse(&uuid, buf);
> > > > > +    _FDT((fdt_setprop_string(fdt, offset, "ibm,unit-guid", buf)));
> > > > > +
> > > > > +    _FDT((fdt_setprop_cell(fdt, offset, "ibm,my-drc-index", 
> > > > > drc_idx)));
> > > > > +
> > > > > +    /*NB : What it should be? */
> > > > > +    _FDT(fdt_setprop_cell(fdt, offset, "ibm,latency-attribute", 
> > > > > 828));
> > > > > +
> > > > > +    _FDT((fdt_setprop_u64(fdt, offset, "ibm,block-size",
> > > > > +                          SPAPR_MINIMUM_SCM_BLOCK_SIZE)));
> > > > > +    _FDT((fdt_setprop_u64(fdt, offset, "ibm,number-of-blocks",
> > > > > +                          size / SPAPR_MINIMUM_SCM_BLOCK_SIZE)));
> > > > > +    _FDT((fdt_setprop_cell(fdt, offset, "ibm,metadata-size", 
> > > > > label_size)));
> > > > > +
> > > > > +    return offset;
> > > > > +}
> > > > > +
> > > > > +static void spapr_add_nvdimm(DeviceState *dev, uint64_t addr,
> > > > > +                             uint64_t size, uint32_t node,
> > > > > +                             Error **errp)
> > > > > +{
> > > > > +    sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_hotplug_handler(dev));
> > > > > +    sPAPRDRConnector *drc;
> > > > > +    bool hotplugged = spapr_drc_hotplugged(dev);
> > > > > +    NVDIMMDevice *nvdimm = NVDIMM(OBJECT(dev));
> > > > > +    void *fdt;
> > > > > +    int fdt_offset, fdt_size;
> > > > > +    Error *local_err = NULL;
> > > > > +
> > > > > +    spapr_dr_connector_new(OBJECT(spapr), TYPE_SPAPR_DRC_PMEM,
> > > > > +                           addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE);
> > > > > +    drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM,
> > > > > +                          addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE);
> > > > > +    g_assert(drc);
> > > > Creating the DRC in the hotplug path looks bogus.  Generally the DRC
> > > > has to exist before you can even attempt to plug the device.
> > > We don't really know how many DRCs to create. Unlike memory hotplug,
> > > where we know how many LMBs are required to reach maxmem, in this case
> > > we don't know how many NVDIMM devices a guest can have. That is the
> > > reason I am creating the DRC on demand. I'll see if it is possible to
> > > address this by putting a cap on the maximum number of NVDIMM devices
> > > a guest can have.
> > Urgh, PAPR.  First it specifies a crappy hotplug model that requires
> > zillions of fixed attachment points to be instantiated, then it breaks
> > its own model.
> > 
> > But.. I still don't really understand how this works.
> > 
> > a) How does the guest know the DRC index to use for the new NVDIMM?
> >     Generally that comes from the device tree, but the guest doesn't
> >     get new device tree information until it calls configure-connector
> >     for which it needs the DRC index.
> The DRC is passed in the device tree blob sent as the payload of the
> hotplug interrupt

Um.. there is no device tree blob as the payload of a hotplug interrupt.
The guest only gets device tree information when it makes
configure-connector calls.

I see that there is a drc identifier field though, so I guess you're
getting the DRC from that.  In existing cases the guest looks that up
in the *existing* device tree to find information about that DRC.  I
guess in the case of NVDIMMs here it doesn't need any more info.
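
For reference, a minimal sketch of how that signalling works in the
existing CPU/PCI hotplug path (these helpers exist in today's spapr
code; the NVDIMM patch itself may do this differently), continuing the
quoted spapr_add_nvdimm() body above:

    /*
     * Sketch based on the existing CPU/PCI hotplug path, not on this
     * patch: attach the per-device FDT fragment to the DRC, then queue
     * a hotplug event that carries only the DRC index.  The guest
     * fetches the fragment later via ibm,configure-connector using
     * that index.
     */
    spapr_drc_attach(drc, dev, fdt, fdt_offset, &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return;
    }

    if (hotplugged) {
        /* the event payload identifies the DRC; no blob is sent */
        spapr_hotplug_req_add_by_index(drc);
    }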

> from which the guest picks the DRC index and makes the subsequent calls.
> > b) AFAICT, NVDIMMs would also require HPT space, much like regular
> >     memory would.  PowerVM doesn't have HPT resizing, so surely it must
> >     already have some sort of cap on the amount of NVDIMM space in
> >     order to size the HPT correctly.
> On Power KVM we will enforce that the NVDIMM is mapped within maxmem;
> the spec, however, allows it to be mapped outside of it. Coming back to
> the original point of creating the DRCs at hotplug time, we could impose
> a limit on the number of NVDIMM devices that can be hotplugged so that
> we can create the DRCs at machine init time.

Ah, so NVDIMMs live within the same maxmem limit as regular memory.
Ok, I guess that makes sense.
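
If the number of pluggable NVDIMMs does end up capped (for instance by
the machine's ram_slots), the PMEM DRCs could be pre-created at machine
init, along the lines of what spapr_create_lmb_dr_connectors() does for
LMBs.  A rough sketch under that assumption (function name is just
illustrative); note it uses slot-based DRC ids, whereas the patch as
posted derives the id from addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE, so the
two schemes would need reconciling:

    /*
     * Rough sketch only: pre-create one PMEM DRC per memory slot at
     * machine init, assuming a slot-based cap and slot-based DRC ids.
     */
    static void spapr_create_scm_dr_connectors(sPAPRMachineState *spapr)
    {
        MachineState *machine = MACHINE(spapr);
        uint64_t i;

        for (i = 0; i < machine->ram_slots; i++) {
            spapr_dr_connector_new(OBJECT(spapr), TYPE_SPAPR_DRC_PMEM, i);
        }
    }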

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
