[PULL 18/33] block/nvme: Simplify nvme_cmd_sync()
From: Stefan Hajnoczi
Subject: [PULL 18/33] block/nvme: Simplify nvme_cmd_sync()
Date: Wed, 4 Nov 2020 15:18:13 +0000
From: Philippe Mathieu-Daudé <philmd@redhat.com>
As all commands use the ADMIN queue, it is pointless to pass
it as an argument each time. Remove the argument, and rename the
function to nvme_admin_cmd_sync() to make this new behavior
clearer.
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20201029093306.1063879-17-philmd@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
block/nvme.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/block/nvme.c b/block/nvme.c
index eed12f4933..cd875555ca 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -481,16 +481,17 @@ static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
     qemu_mutex_unlock(&q->lock);
 }
 
-static void nvme_cmd_sync_cb(void *opaque, int ret)
+static void nvme_admin_cmd_sync_cb(void *opaque, int ret)
 {
     int *pret = opaque;
     *pret = ret;
     aio_wait_kick();
 }
 
-static int nvme_cmd_sync(BlockDriverState *bs, NVMeQueuePair *q,
-                         NvmeCmd *cmd)
+static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
 {
+    BDRVNVMeState *s = bs->opaque;
+    NVMeQueuePair *q = s->queues[INDEX_ADMIN];
     AioContext *aio_context = bdrv_get_aio_context(bs);
     NVMeRequest *req;
     int ret = -EINPROGRESS;
@@ -498,7 +499,7 @@ static int nvme_cmd_sync(BlockDriverState *bs, NVMeQueuePair *q,
     if (!req) {
         return -EBUSY;
     }
-    nvme_submit_command(q, req, cmd, nvme_cmd_sync_cb, &ret);
+    nvme_submit_command(q, req, cmd, nvme_admin_cmd_sync_cb, &ret);
 
     AIO_WAIT_WHILE(aio_context, ret == -EINPROGRESS);
     return ret;
@@ -535,7 +536,7 @@ static bool nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
     memset(id, 0, sizeof(*id));
     cmd.dptr.prp1 = cpu_to_le64(iova);
 
-    if (nvme_cmd_sync(bs, s->queues[INDEX_ADMIN], &cmd)) {
+    if (nvme_admin_cmd_sync(bs, &cmd)) {
         error_setg(errp, "Failed to identify controller");
         goto out;
     }
@@ -558,7 +559,7 @@ static bool nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
     memset(id, 0, sizeof(*id));
     cmd.cdw10 = 0;
     cmd.nsid = cpu_to_le32(namespace);
-    if (nvme_cmd_sync(bs, s->queues[INDEX_ADMIN], &cmd)) {
+    if (nvme_admin_cmd_sync(bs, &cmd)) {
         error_setg(errp, "Failed to identify namespace");
         goto out;
     }
@@ -664,7 +665,7 @@ static bool nvme_add_io_queue(BlockDriverState *bs, Error **errp)
         .cdw10 = cpu_to_le32(((queue_size - 1) << 16) | n),
         .cdw11 = cpu_to_le32(NVME_CQ_IEN | NVME_CQ_PC),
     };
-    if (nvme_cmd_sync(bs, s->queues[INDEX_ADMIN], &cmd)) {
+    if (nvme_admin_cmd_sync(bs, &cmd)) {
         error_setg(errp, "Failed to create CQ io queue [%u]", n);
         goto out_error;
     }
@@ -674,7 +675,7 @@ static bool nvme_add_io_queue(BlockDriverState *bs, Error **errp)
         .cdw10 = cpu_to_le32(((queue_size - 1) << 16) | n),
         .cdw11 = cpu_to_le32(NVME_SQ_PC | (n << 16)),
     };
-    if (nvme_cmd_sync(bs, s->queues[INDEX_ADMIN], &cmd)) {
+    if (nvme_admin_cmd_sync(bs, &cmd)) {
         error_setg(errp, "Failed to create SQ io queue [%u]", n);
         goto out_error;
     }
@@ -887,7 +888,7 @@ static int nvme_enable_disable_write_cache(BlockDriverState *bs, bool enable,
         .cdw11 = cpu_to_le32(enable ? 0x01 : 0x00),
     };
 
-    ret = nvme_cmd_sync(bs, s->queues[INDEX_ADMIN], &cmd);
+    ret = nvme_admin_cmd_sync(bs, &cmd);
     if (ret) {
         error_setg(errp, "Failed to configure NVMe write cache");
     }
--
2.28.0