qemu-block

Re: [PATCH V3 7/8] hw/block/nvme: support changed namespace asynchronous event


From: Klaus Jensen
Subject: Re: [PATCH V3 7/8] hw/block/nvme: support changed namespace asynchronous event
Date: Tue, 2 Mar 2021 10:28:20 +0100

On Mar  2 18:26, Minwoo Im wrote:
> On 21-03-01 06:56:02, Klaus Jensen wrote:
> > On Mar  1 01:10, Minwoo Im wrote:
> > > If the namespace inventory is changed for some reason (e.g., namespace
> > > attachment/detachment), the controller can send an event notification to
> > > the host so that it can manage namespaces.
> > > 
> > > This patch sends the AEN to the host after namespaces are attached to or
> > > detached from a controller.  To support clearing the event from the
> > > controller, this patch also implements the Get Log Page command for the
> > > Changed Namespace List log page.  To return the namespace id list through
> > > that command, an id is added to a per-controller list (changed_ns_list)
> > > whenever the namespace inventory is updated.
> > > 
> > > To indicate support for this async event, this patch sets OAES (Optional
> > > Asynchronous Events Supported) in the Identify Controller data
> > > structure.
> > > 
> > > Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
> > > ---
> > >  hw/block/nvme.c      | 44 ++++++++++++++++++++++++++++++++++++++++++++
> > >  hw/block/nvme.h      |  7 +++++++
> > >  include/block/nvme.h |  7 +++++++
> > >  3 files changed, 58 insertions(+)
> > > 
> > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > > index 68c2e63d9412..fc06f806e58e 100644
> > > --- a/hw/block/nvme.c
> > > +++ b/hw/block/nvme.c
> > > @@ -2980,6 +2980,32 @@ static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
> > >                      DMA_DIRECTION_FROM_DEVICE, req);
> > >  }
> > >  
> > > +static uint16_t nvme_changed_nslist(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
> > > +                                    uint64_t off, NvmeRequest *req)
> > > +{
> > > +    uint32_t nslist[1024];
> > > +    uint32_t trans_len;
> > > +    NvmeChangedNs *ns, *next;
> > > +    int i = 0;
> > > +
> > > +    memset(nslist, 0x0, sizeof(nslist));
> > > +    trans_len = MIN(sizeof(nslist) - off, buf_len);
> > > +
> > > +    QTAILQ_FOREACH_SAFE(ns, &n->changed_ns_list, entry, next) {
> > > +        nslist[i++] = ns->nsid;
> > > +
> > > +        QTAILQ_REMOVE(&n->changed_ns_list, ns, entry);
> > > +        g_free(ns);
> > > +    }
> > > +
> > > +    if (!rae) {
> > > +        nvme_clear_events(n, NVME_AER_TYPE_NOTICE);
> > > +    }
> > > +
> > > +    return nvme_dma(n, ((uint8_t *)nslist) + off, trans_len,
> > > +                    DMA_DIRECTION_FROM_DEVICE, req);
> > > +}
> > > +
> > >  static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint8_t csi, uint32_t buf_len,
> > >                                   uint64_t off, NvmeRequest *req)
> > >  {
> > > @@ -3064,6 +3090,8 @@ static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req)
> > >          return nvme_smart_info(n, rae, len, off, req);
> > >      case NVME_LOG_FW_SLOT_INFO:
> > >          return nvme_fw_log_info(n, len, off, req);
> > > +    case NVME_LOG_CHANGED_NSLIST:
> > > +        return nvme_changed_nslist(n, rae, len, off, req);
> > >      case NVME_LOG_CMD_EFFECTS:
> > >          return nvme_cmd_effects(n, csi, len, off, req);
> > >      default:
> > > @@ -3882,6 +3910,7 @@ static uint16_t nvme_ns_attachment(NvmeCtrl *n, NvmeRequest *req)
> > >      uint16_t *ids = &list[1];
> > >      uint16_t ret;
> > >      int i;
> > > +    NvmeChangedNs *changed_nsid;
> > >  
> > >      trace_pci_nvme_ns_attachment(nvme_cid(req), dw10 & 0xf);
> > >  
> > > @@ -3920,6 +3949,18 @@ static uint16_t nvme_ns_attachment(NvmeCtrl *n, NvmeRequest *req)
> > >  
> > >              nvme_ns_detach(ctrl, ns);
> > >          }
> > > +
> > > +        /*
> > > +         * Add namespace id to the changed namespace id list for event clearing
> > > +         * via Get Log Page command.
> > > +         */
> > > +        changed_nsid = g_new(NvmeChangedNs, 1);
> > > +        changed_nsid->nsid = nsid;
> > > +        QTAILQ_INSERT_TAIL(&ctrl->changed_ns_list, changed_nsid, entry);
> > > +
> > > +        nvme_enqueue_event(ctrl, NVME_AER_TYPE_NOTICE,
> > > +                           NVME_AER_INFO_NOTICE_NS_ATTR_CHANGED,
> > > +                           NVME_LOG_CHANGED_NSLIST);
> > >      }
> > 
> > If one just keeps attaching/detaching we end up with more than 1024
> > entries here and go out of bounds in nvme_changed_nslist.
> > 
> > How about having the QTAILQ_ENTRY directly on the NvmeNamespace struct
> > and use QTAILQ_IN_USE to check if the namespace is already in the list?
> 
> QTAILQ_IN_USE might make it tough to represent the relationship between a
> controller and the namespace itself.  So, I will rework this with a
> standard bitmap rather than the list; I think a bitmap will be easier for
> representing that relationship.

OK, sounds reasonable!
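For reference, the bitmap idea could be sketched roughly as below. This is a standalone illustration, not the actual QEMU nvme device code: the `Ctrl` type, the helper names, and the 256-namespace limit are all assumptions made for the example. The point is that each namespace occupies at most one bit, so repeated attach/detach cycles can never grow the structure past its fixed size, unlike the unbounded changed_ns_list that overflows the 1024-entry nslist buffer.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sizes/names for illustration only. */
#define NVME_MAX_NAMESPACES 256

typedef struct {
    /* bit n set => nsid n is pending in the Changed Namespace List log */
    uint64_t changed_nsids[(NVME_MAX_NAMESPACES + 63) / 64 + 1];
} Ctrl;

static void changed_ns_set(Ctrl *c, uint32_t nsid)
{
    /* Re-setting an already-set bit is a no-op, so repeated
     * attach/detach of the same namespace cannot grow anything. */
    c->changed_nsids[nsid / 64] |= UINT64_C(1) << (nsid % 64);
}

/* Drain the bitmap into nslist in ascending nsid order, clearing each
 * bit as it is reported; return the number of entries written. */
static int changed_ns_fill(Ctrl *c, uint32_t *nslist, size_t max)
{
    int i = 0;

    for (uint32_t nsid = 1;
         nsid <= NVME_MAX_NAMESPACES && (size_t)i < max; nsid++) {
        if (c->changed_nsids[nsid / 64] & (UINT64_C(1) << (nsid % 64))) {
            nslist[i++] = nsid;
            c->changed_nsids[nsid / 64] &= ~(UINT64_C(1) << (nsid % 64));
        }
    }

    return i;
}
```

A real implementation would still need to honor the spec behavior for more than 1024 changed namespaces (the log's first entry becomes 0xFFFFFFFF), which a bounded bitmap makes straightforward to detect.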


