Re: [PATCH v6 32/42] nvme: allow multiple aios per command


From: Klaus Birkelund Jensen
Subject: Re: [PATCH v6 32/42] nvme: allow multiple aios per command
Date: Tue, 31 Mar 2020 07:47:02 +0200

On Mar 25 12:57, Maxim Levitsky wrote:
> On Mon, 2020-03-16 at 07:29 -0700, Klaus Jensen wrote:
> > From: Klaus Jensen <address@hidden>
> > 
> > This refactors how the device issues asynchronous block backend
> > requests. The NvmeRequest now holds a queue of NvmeAIOs that are
> > associated with the command. This allows multiple aios to be issued for
> > a command. Only when all requests have been completed will the device
> > post a completion queue entry.
> > 
> > Because the device is currently guaranteed to only issue a single aio
> > request per command, the benefit is not immediately obvious. But this
> > functionality is required to support metadata, the dataset management
> > command and other features.
> > 
> > Signed-off-by: Klaus Jensen <address@hidden>
> > Signed-off-by: Klaus Jensen <address@hidden>
> > Acked-by: Keith Busch <address@hidden>
> > ---
> >  hw/block/nvme.c       | 377 +++++++++++++++++++++++++++++++-----------
> >  hw/block/nvme.h       | 129 +++++++++++++--
> >  hw/block/trace-events |   6 +
> >  3 files changed, 407 insertions(+), 105 deletions(-)
> > 
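The shape of the change, in data-structure terms: each NvmeRequest now
carries a tail queue of outstanding NvmeAIOs, and the completion queue
entry is only posted once that queue has drained. A minimal sketch of
that invariant follows; the queue field names (aio_tailq, tailq_entry)
match the diff below, but the other members and the completion helper
are illustrative, not the patch's actual definitions.

#include "qemu/queue.h"

typedef struct NvmeAIO NvmeAIO;

struct NvmeAIO {
    BlockBackend *blk;                  /* backend the aio is issued against */
    int64_t offset;                     /* byte offset into the backend */
    size_t len;
    QTAILQ_ENTRY(NvmeAIO) tailq_entry;  /* linkage in req->aio_tailq */
};

typedef struct NvmeRequest {
    /* ... */
    QTAILQ_HEAD(, NvmeAIO) aio_tailq;   /* aios still in flight for this cmd */
} NvmeRequest;

/* Illustrative completion step: a finished aio removes itself from its
 * request, and the cqe is enqueued only when no aios remain outstanding.
 * nvme_enqueue_req_completion() and how the cq is passed in are
 * assumptions here, not taken from the quoted hunks. */
static void nvme_aio_complete_one(NvmeCQueue *cq, NvmeRequest *req,
                                  NvmeAIO *aio)
{
    QTAILQ_REMOVE(&req->aio_tailq, aio, tailq_entry);
    nvme_aio_destroy(aio);

    if (QTAILQ_EMPTY(&req->aio_tailq)) {
        nvme_enqueue_req_completion(cq, req);
    }
}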
> > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > index 0d2b5b45b0c5..817384e3b1a9 100644
> > --- a/hw/block/nvme.c
> > +++ b/hw/block/nvme.c
> > @@ -373,6 +374,99 @@ static uint16_t nvme_map(NvmeCtrl *n, NvmeCmd *cmd, QEMUSGList *qsg,
> >      return nvme_map_prp(n, qsg, iov, prp1, prp2, len, req);
> >  }
> >  
> > +static void nvme_aio_destroy(NvmeAIO *aio)
> > +{
> > +    g_free(aio);
> > +}
> > +
> > +static inline void nvme_req_register_aio(NvmeRequest *req, NvmeAIO *aio,
> I guess I'd call this nvme_req_add_aio,
> or nvme_add_aio_to_req.
> Thoughts?
> Also, you can leave this as is, but add a comment on top explaining it.
> 

nvme_req_add_aio it is :) And comment added.

> > +                                         NvmeAIOOp opc)
> > +{
> > +    aio->opc = opc;
> > +
> > +    trace_nvme_dev_req_register_aio(nvme_cid(req), aio, blk_name(aio->blk),
> > +                                    aio->offset, aio->len,
> > +                                    nvme_aio_opc_str(aio), req);
> > +
> > +    if (req) {
> > +        QTAILQ_INSERT_TAIL(&req->aio_tailq, aio, tailq_entry);
> > +    }
> > +}
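
For reference, this is roughly how the renamed helper reads with the
requested comment on top; the body is taken from the hunk above, only
the comment wording is new (and illustrative):

/*
 * nvme_req_add_aio: tag the aio with its opcode and, if it belongs to a
 * request, append it to the request's queue of outstanding aios.
 */
static inline void nvme_req_add_aio(NvmeRequest *req, NvmeAIO *aio,
                                    NvmeAIOOp opc)
{
    aio->opc = opc;

    trace_nvme_dev_req_register_aio(nvme_cid(req), aio, blk_name(aio->blk),
                                    aio->offset, aio->len,
                                    nvme_aio_opc_str(aio), req);

    if (req) {
        QTAILQ_INSERT_TAIL(&req->aio_tailq, aio, tailq_entry);
    }
}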
> > +
> > +static void nvme_submit_aio(NvmeAIO *aio)
> OK, this name makes sense
> Also please add a comment on top.

Done.
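
The body of nvme_submit_aio is not part of the quoted hunks, so the
following is only a hedged sketch of what the function and its new
comment might look like. The NVME_AIO_OPC_* names, the iov/aiocb
members and nvme_aio_cb are assumptions; blk_aio_preadv, blk_aio_pwritev
and blk_aio_flush are the stock QEMU block-backend API.

/*
 * nvme_submit_aio: issue the aio to its block backend, dispatching on
 * the opcode set by nvme_req_add_aio.
 */
static void nvme_submit_aio(NvmeAIO *aio)
{
    BlockBackend *blk = aio->blk;

    switch (aio->opc) {
    case NVME_AIO_OPC_READ:     /* assumed opcode name */
        aio->aiocb = blk_aio_preadv(blk, aio->offset, &aio->iov, 0,
                                    nvme_aio_cb, aio);
        break;
    case NVME_AIO_OPC_WRITE:    /* assumed opcode name */
        aio->aiocb = blk_aio_pwritev(blk, aio->offset, &aio->iov, 0,
                                     nvme_aio_cb, aio);
        break;
    case NVME_AIO_OPC_FLUSH:    /* assumed opcode name */
        aio->aiocb = blk_aio_flush(blk, nvme_aio_cb, aio);
        break;
    default:
        g_assert_not_reached();
    }
}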

> > @@ -505,9 +600,11 @@ static inline uint16_t nvme_check_mdts(NvmeCtrl *n, size_t len,
> >      return NVME_SUCCESS;
> >  }
> >  
> > -static inline uint16_t nvme_check_prinfo(NvmeCtrl *n, NvmeNamespace *ns,
> > -                                         uint16_t ctrl, NvmeRequest *req)
> > +static inline uint16_t nvme_check_prinfo(NvmeCtrl *n, uint16_t ctrl,
> > +                                         NvmeRequest *req)
> >  {
> > +    NvmeNamespace *ns = req->ns;
> > +
> This should go to the patch that added nvme_check_prinfo
> 

Probably killing that patch.

> > @@ -516,10 +613,10 @@ static inline uint16_t nvme_check_prinfo(NvmeCtrl *n, NvmeNamespace *ns,
> >      return NVME_SUCCESS;
> >  }
> >  
> > -static inline uint16_t nvme_check_bounds(NvmeCtrl *n, NvmeNamespace *ns,
> > -                                         uint64_t slba, uint32_t nlb,
> > -                                         NvmeRequest *req)
> > +static inline uint16_t nvme_check_bounds(NvmeCtrl *n, uint64_t slba,
> > +                                         uint32_t nlb, NvmeRequest *req)
> >  {
> > +    NvmeNamespace *ns = req->ns;
> >      uint64_t nsze = le64_to_cpu(ns->id_ns.nsze);
> This should go to the patch that added nvme_check_bounds as well
> 

We can't really, because the NvmeRequest does not hold a reference to
the namespace as a struct member at that point. This is also an issue
with the nvme_check_prinfo function above.

> >  
> >      if (unlikely(UINT64_MAX - slba < nlb || slba + nlb > nsze)) {
> > @@ -530,55 +627,154 @@ static inline uint16_t nvme_check_bounds(NvmeCtrl *n, NvmeNamespace *ns,
> >      return NVME_SUCCESS;
> >  }
> >  
> > -static void nvme_rw_cb(void *opaque, int ret)
> > +static uint16_t nvme_check_rw(NvmeCtrl *n, NvmeRequest *req)
> > +{
> > +    NvmeNamespace *ns = req->ns;
> > +    NvmeRwCmd *rw = (NvmeRwCmd *) &req->cmd;
> > +    uint16_t ctrl = le16_to_cpu(rw->control);
> > +    size_t len = req->nlb << nvme_ns_lbads(ns);
> > +    uint16_t status;
> > +
> > +    status = nvme_check_mdts(n, len, req);
> > +    if (status) {
> > +        return status;
> > +    }
> > +
> > +    status = nvme_check_prinfo(n, ctrl, req);
> > +    if (status) {
> > +        return status;
> > +    }
> > +
> > +    status = nvme_check_bounds(n, req->slba, req->nlb, req);
> > +    if (status) {
> > +        return status;
> > +    }
> > +
> > +    return NVME_SUCCESS;
> > +}
> 
> Nitpick: I hate to say it, but nvme_check_rw should be in a separate patch
> as well.
> It will also make the diff more readable (when adding a function and
> changing a function at the same time, you get a diff between two unrelated
> things).
> 

Done, but had to do it as a follow up patch.

> >  
> > -static uint16_t nvme_write_zeros(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
> > -    NvmeRequest *req)
> > +static uint16_t nvme_write_zeroes(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
> Very small nitpick about zeros/zeroes: this should move to some refactoring
> patch, to be honest.
> 

Done ;)

> 
> The patch is still too large IMHO to review properly, and a few things can
> be split from it.
> I tried my best to review it, but I might have missed something.
> 

Yeah, I know, but thanks for trying!


