From: Philippe Mathieu-Daudé
Subject: [PATCH 24/25] block/nvme: Align iov's va and size on host page size
Date: Tue, 27 Oct 2020 14:55:46 +0100
From: Eric Auger <eric.auger@redhat.com>
Make sure iov's va and size are properly aligned on the
host page size.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
block/nvme.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/block/nvme.c b/block/nvme.c
index e3626045565..c1c52bae44f 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -1018,11 +1018,12 @@ static coroutine_fn int nvme_cmd_map_qiov(BlockDriverState *bs, NvmeCmd *cmd,
     for (i = 0; i < qiov->niov; ++i) {
         bool retry = true;
         uint64_t iova;
+        size_t len = QEMU_ALIGN_UP(qiov->iov[i].iov_len,
+                                   qemu_real_host_page_size);
 try_map:
         r = qemu_vfio_dma_map(s->vfio,
                               qiov->iov[i].iov_base,
-                              qiov->iov[i].iov_len,
-                              true, &iova);
+                              len, true, &iova);
         if (r == -ENOMEM && retry) {
             retry = false;
             trace_nvme_dma_flush_queue_wait(s);
@@ -1166,8 +1167,9 @@ static inline bool nvme_qiov_aligned(BlockDriverState *bs,
     BDRVNVMeState *s = bs->opaque;

     for (i = 0; i < qiov->niov; ++i) {
-        if (!QEMU_PTR_IS_ALIGNED(qiov->iov[i].iov_base, s->page_size) ||
-            !QEMU_IS_ALIGNED(qiov->iov[i].iov_len, s->page_size)) {
+        if (!QEMU_PTR_IS_ALIGNED(qiov->iov[i].iov_base,
+                                 qemu_real_host_page_size) ||
+            !QEMU_IS_ALIGNED(qiov->iov[i].iov_len, qemu_real_host_page_size)) {
             trace_nvme_qiov_unaligned(qiov, i, qiov->iov[i].iov_base,
                                       qiov->iov[i].iov_len, s->page_size);
             return false;
@@ -1183,7 +1185,7 @@ static int nvme_co_prw(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
     int r;
     uint8_t *buf = NULL;
     QEMUIOVector local_qiov;
-
+    size_t len = QEMU_ALIGN_UP(bytes, qemu_real_host_page_size);
     assert(QEMU_IS_ALIGNED(offset, s->page_size));
     assert(QEMU_IS_ALIGNED(bytes, s->page_size));
     assert(bytes <= s->max_transfer);
@@ -1193,7 +1195,7 @@ static int nvme_co_prw(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
     }
     s->stats.unaligned_accesses++;
     trace_nvme_prw_buffered(s, offset, bytes, qiov->niov, is_write);
-    buf = qemu_try_memalign(s->page_size, bytes);
+    buf = qemu_try_memalign(qemu_real_host_page_size, len);
     if (!buf) {
         return -ENOMEM;
--
2.26.2