From: Łukasz Gieryk
Subject: Re: [PATCH v2 12/15] hw/nvme: Initialize capability structures for primary/secondary controllers
Date: Wed, 24 Nov 2021 15:26:30 +0100
User-agent: Mutt/1.9.4 (2018-02-28)

On Wed, Nov 24, 2021 at 09:04:31AM +0100, Klaus Jensen wrote:
> On Nov 16 16:34, Łukasz Gieryk wrote:
> > With four new properties:
> >  - sriov_v{i,q}_flexible,
> >  - sriov_max_v{i,q}_per_vf,
> > one can configure the number of available flexible resources, as well as
> > the limits. The primary and secondary controller capability structures
> > are initialized accordingly.
> > 
> > Since the number of available queues (interrupts) now varies between
> > VF/PF, BAR size calculation is also adjusted.
> > 
> > Signed-off-by: Łukasz Gieryk <lukasz.gieryk@linux.intel.com>
> > ---
> >  hw/nvme/ctrl.c       | 138 ++++++++++++++++++++++++++++++++++++++++---
> >  hw/nvme/nvme.h       |   4 ++
> >  include/block/nvme.h |   5 ++
> >  3 files changed, 140 insertions(+), 7 deletions(-)
> > 
> > diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
> > index f8f5dfe204..f589ffde59 100644
> > --- a/hw/nvme/ctrl.c
> > +++ b/hw/nvme/ctrl.c
> > @@ -6358,13 +6444,40 @@ static void nvme_init_state(NvmeCtrl *n)
> >      n->starttime_ms = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL);
> >      n->aer_reqs = g_new0(NvmeRequest *, n->params.aerl + 1);
> >  
> > -    list->numcntl = cpu_to_le16(n->params.sriov_max_vfs);
> > -    for (i = 0; i < n->params.sriov_max_vfs; i++) {
> > +    list->numcntl = cpu_to_le16(max_vfs);
> > +    for (i = 0; i < max_vfs; i++) {
> >          sctrl = &list->sec[i];
> >          sctrl->pcid = cpu_to_le16(n->cntlid);
> >      }
> >  
> >      cap->cntlid = cpu_to_le16(n->cntlid);
> > +    cap->crt = NVME_CRT_VQ | NVME_CRT_VI;
> > +
> > +    if (pci_is_vf(&n->parent_obj)) {
> > +        cap->vqprt = cpu_to_le16(1 + n->conf_ioqpairs);
> > +    } else {
> > +        cap->vqprt = cpu_to_le16(1 + n->params.max_ioqpairs -
> > +                                 n->params.sriov_vq_flexible);
> > +        cap->vqfrt = cpu_to_le32(n->params.sriov_vq_flexible);
> > +        cap->vqrfap = cap->vqfrt;
> > +        cap->vqgran = cpu_to_le16(NVME_VF_RES_GRANULARITY);
> > +        cap->vqfrsm = n->params.sriov_max_vq_per_vf ?
> > +                        cpu_to_le16(n->params.sriov_max_vq_per_vf) :
> > +                        cap->vqprt;
> 
> That this defaults to VQPRT doesn't seem right. It should default to
> VQFRT. Does not make sense to report a maximum number of assignable
> flexible resources that are bigger than the number of flexible resources
> available.

I’ve explained in one of the v1 threads why I think the current default
is better than defaulting to VQPRT.

What you’ve noticed is indeed an inconvenience, but it is, at least in
my opinion, part of the design. What matters is the current number of
unassigned flexible resources. It may be lower than VQFRSM for multiple
reasons:
 1) resources are bound to the PF,
 2) resources are bound to other VFs,
 3) resources simply don’t exist (not baked into silicon: VQFRT < VQFRSM).

If 1) and 2) are allowed to happen, and the user must be aware of that,
then why shouldn’t 3) be allowed as well?
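
To make it concrete, consider a hypothetical configuration (the values
are made up for illustration; only the property names and the VQPRT
formula come from this patch):

    -device nvme,serial=deadbeef,max_ioqpairs=26,sriov_max_vfs=4, \
            sriov_vq_flexible=8,sriov_max_vq_per_vf=16

The PF would then report:

    VQPRT  = 1 + 26 - 8 = 19  (resources private to the PF, incl. the admin queue)
    VQFRT  = 8                (flexible resources in total)
    VQFRSM = 16               (per-VF assignment limit)

Since VQFRSM > VQFRT, no single VF can ever actually be assigned 16
queues; the host has to look at the number of currently unassigned
flexible resources anyway, exactly as it must for cases 1) and 2).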



