Re: [PATCH v6 09/42] nvme: add max_ioqpairs device parameter


From: Maxim Levitsky
Subject: Re: [PATCH v6 09/42] nvme: add max_ioqpairs device parameter
Date: Tue, 31 Mar 2020 12:48:02 +0300

On Tue, 2020-03-31 at 07:40 +0200, Klaus Birkelund Jensen wrote:
> On Mar 25 12:39, Maxim Levitsky wrote:
> > On Mon, 2020-03-16 at 07:28 -0700, Klaus Jensen wrote:
> > > From: Klaus Jensen <address@hidden>
> > > 
> > > The num_queues device parameter has a slightly confusing meaning because
> > > it accounts for the admin queue pair, which is not really optional.
> > > Secondly, it is really the maximum number of queues allowed.
> > > 
> > > Add a new max_ioqpairs parameter that only accounts for I/O queue pairs,
> > > but keep num_queues for compatibility.
> > > 
> > > Signed-off-by: Klaus Jensen <address@hidden>
> > > ---
> > >  hw/block/nvme.c | 45 ++++++++++++++++++++++++++-------------------
> > >  hw/block/nvme.h |  4 +++-
> > >  2 files changed, 29 insertions(+), 20 deletions(-)
> > > 
> > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > > index 7cf7cf55143e..7dfd8a1a392d 100644
> > > --- a/hw/block/nvme.c
> > > +++ b/hw/block/nvme.c
> > > @@ -1332,9 +1333,15 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
> > >      int64_t bs_size;
> > >      uint8_t *pci_conf;
> > >  
> > > -    if (!n->params.num_queues) {
> > > -        error_setg(errp, "num_queues can't be zero");
> > > -        return;
> > > +    if (n->params.num_queues) {
> > > +        warn_report("nvme: num_queues is deprecated; please use max_ioqpairs "
> > > +                    "instead");
> > > +
> > > +        n->params.max_ioqpairs = n->params.num_queues - 1;
> > > +    }
> > > +
> > > +    if (!n->params.max_ioqpairs) {
> > > +        error_setg(errp, "max_ioqpairs can't be less than 1");
> > >      }
> > 
> > This is not even a nitpick, but just an idea.
> > 
> > It might be worth it to allow max_ioqpairs=0 to simulate a 'broken'
> > nvme controller. I know that the kernel has special handling for such
> > controllers, which includes only creating the control character device
> > (/dev/nvme*) through which the user can submit commands to try and 'fix'
> > the controller (by re-uploading firmware, maybe, or something like that).
> > 
> > 
> 
> Not sure about the implications of this, so I'll leave that on the TODO
> :) But a controller with no I/O queues is an "Administrative Controller"
> and perfectly legal in NVMe v1.4 AFAIK.
That's what I was thinking as well. Keeping this on a TODO list is perfectly fine.
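
Just to illustrate the idea (a purely hypothetical sketch, not what the posted
patch does), realize could accept max_ioqpairs=0 and bring the controller up
with only the admin queue pair, roughly along these lines:

    if (!n->params.max_ioqpairs) {
        /*
         * Hypothetical: instead of setting an error, accept zero I/O
         * queue pairs and come up with only the admin queue pair.
         */
        warn_report("nvme: no I/O queue pairs configured; admin-only "
                    "controller");
    }

Such a controller would then also have to report zero allocated I/O queues
through the Number of Queues feature, but that is beyond this sketch.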

> 
> > >  
> > >      if (!n->conf.blk) {
> > > @@ -1365,19 +1372,19 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
> > >      pcie_endpoint_cap_init(pci_dev, 0x80);
> > >  
> > >      n->num_namespaces = 1;
> > > -    n->reg_size = pow2ceil(0x1004 + 2 * (n->params.num_queues + 1) * 4);
> > > +    n->reg_size = pow2ceil(0x1008 + 2 * (n->params.max_ioqpairs) * 4);
> > 
> > I hate to say it, but it looks like this thing (which I mentioned to you
> > in V5) was a pre-existing bug, which is indeed fixed now. In theory such
> > fixes should go into separate patches, but in this case I guess that would
> > be too much to ask for. Maybe mention this in the commit message instead,
> > so that the fix doesn't stay hidden like that?
> > 
> > 
> 
> I'm convinced now. I have added a preparatory bugfix patch before this
> patch.
Thanks a lot!
Sorry for not noticing it before.
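
For reference, the arithmetic behind the corrected size, assuming the standard
BAR0 layout with CAP.DSTRD = 0 (i.e. a 4-byte doorbell stride):

    /*
     * 0x0000 - 0x0fff : controller registers
     * 0x1000 -        : one SQ tail + CQ head doorbell pair per queue pair
     *
     * With max_ioqpairs I/O queue pairs plus the admin queue pair, the
     * doorbell area needs 2 * (max_ioqpairs + 1) * 4 bytes, so
     *
     *   reg_size = 0x1000 + 2 * (max_ioqpairs + 1) * 4
     *            = 0x1008 + 2 * max_ioqpairs * 4
     */
    n->reg_size = pow2ceil(0x1008 + 2 * n->params.max_ioqpairs * 4);

The old 0x1004-based expression did not match this layout, which is what the
preparatory bugfix patch addresses.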

> 
> > 
> > Reviewed-by: Maxim Levitsky <address@hidden>
> > 
> > Best regards,
> >     Maxim Levitsky
> > 

Best regards,
        Maxim Levitsky