From: Vladimir Sementsov-Ogievskiy
Subject: [Qemu-devel] [PATCH] block/io: fix bdrv_co_do_copy_on_readv error handling
Date: Thu, 14 Mar 2019 13:14:15 +0300

It is not safe to treat a bdrv_is_allocated() error as if the range were
unallocated: if that guess is wrong, we may overwrite guest data.

Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
---
 block/io.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/block/io.c b/block/io.c
index 2ba603c7bc..dccad64d46 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1193,11 +1193,13 @@ static int coroutine_fn bdrv_co_do_copy_on_readv(BdrvChild *child,
         ret = bdrv_is_allocated(bs, cluster_offset,
                                 MIN(cluster_bytes, max_transfer), &pnum);
         if (ret < 0) {
-            /* Safe to treat errors in querying allocation as if
-             * unallocated; we'll probably fail again soon on the
-             * read, but at least that will set a decent errno.
+            /*
+             * We must fail here and must not guess: treating the error as
+             * "allocated" would skip a required copy-on-read, while treating
+             * it as "unallocated" would overwrite guest data that is already
+             * in the top layer.
              */
-            pnum = MIN(cluster_bytes, max_transfer);
+            goto err;
         }
 
         /* Stop at EOF if the image ends in the middle of the cluster */
@@ -1208,7 +1210,7 @@ static int coroutine_fn bdrv_co_do_copy_on_readv(BdrvChild *child,
 
         assert(skip_bytes < pnum);
 
-        if (ret <= 0) {
+        if (ret == 0) {
             /* Must copy-on-read; use the bounce buffer */
             pnum = MIN(pnum, MAX_BOUNCE_BUFFER);
             qemu_iovec_init_buf(&local_qiov, bounce_buffer, pnum);
-- 
2.18.0
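
Not part of the patch: a minimal, self-contained sketch of the error-handling
rule the patch enforces. query_allocated() and do_copy_on_read() are
hypothetical stand-ins, not QEMU APIs; the point is only that a failed
allocation query must be propagated rather than treated as either answer.

#include <stdio.h>
#include <errno.h>

/*
 * Hypothetical stand-in for an allocation query such as bdrv_is_allocated():
 * returns 1 if the range is allocated in the top layer, 0 if it is not, and
 * a negative errno on failure.
 */
static int query_allocated(int simulate_error)
{
    if (simulate_error) {
        return -EIO;
    }
    return 1; /* pretend the range is already allocated in the top layer */
}

/*
 * Error-propagating pattern: a failed query aborts the operation.  Guessing
 * "unallocated" would trigger a copy-on-read that overwrites data already in
 * the top layer; guessing "allocated" would silently skip a required
 * copy-on-read.
 */
static int do_copy_on_read(int simulate_error)
{
    int ret = query_allocated(simulate_error);

    if (ret < 0) {
        return ret;             /* propagate, don't guess */
    }
    if (ret == 0) {
        printf("range unallocated: performing copy-on-read\n");
    } else {
        printf("range allocated: reading from the top layer\n");
    }
    return 0;
}

int main(void)
{
    printf("query ok:   %d\n", do_copy_on_read(0));
    printf("query fail: %d\n", do_copy_on_read(1));
    return 0;
}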