
[PATCH 16/15 v13] block/block-copy: fix block_copy


From: Vladimir Sementsov-Ogievskiy
Date: Wed, 25 Sep 2019 19:58:08 +0300

block_copy_reset_unallocated may yield, and during this yield someone
else may handle the dirty bits we are processing. Calling the
block_copy_with_* functions on a region that is no longer dirty would
copy already-updated data, which is wrong.

To be sure that the block_copy_with_* functions are called only on
dirty regions, check the dirty bitmap _after_
block_copy_reset_unallocated returns.

Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
---

Hi!

I suddenly realized that there is a bug in

[PATCH v13 15/15] block/backup: use backup-top instead of write notifiers
(queued at Max's https://git.xanclic.moe/XanClic/qemu/commits/branch/block)

And here is a fix, which may be squashed to
"block/backup: use backup-top instead of write notifiers" commit.

 block/block-copy.c | 31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index 55bc360d22..430b88124f 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -292,7 +292,7 @@ int coroutine_fn block_copy(BlockCopyState *s,
     assert(QEMU_IS_ALIGNED(end, s->cluster_size));
 
     while (start < end) {
-        int64_t dirty_end;
+        int64_t chunk_end = end, dirty_end;
 
         if (!bdrv_dirty_bitmap_get(s->copy_bitmap, start)) {
             trace_block_copy_skip(s, start);
@@ -300,12 +300,6 @@ int coroutine_fn block_copy(BlockCopyState *s,
             continue; /* already copied */
         }
 
-        dirty_end = bdrv_dirty_bitmap_next_zero(s->copy_bitmap, start,
-                                                (end - start));
-        if (dirty_end < 0) {
-            dirty_end = end;
-        }
-
         if (s->skip_unallocated) {
             ret = block_copy_reset_unallocated(s, start, &status_bytes);
             if (ret == 0) {
@@ -313,20 +307,37 @@ int coroutine_fn block_copy(BlockCopyState *s,
                 start += status_bytes;
                 continue;
             }
+
+            if (!bdrv_dirty_bitmap_get(s->copy_bitmap, start)) {
+                /*
+                 * Someone already handled this bit during yield in
+                 * block_copy_reset_unallocated.
+                 */
+                trace_block_copy_skip(s, start);
+                start += s->cluster_size;
+                continue;
+            }
+
             /* Clamp to known allocated region */
-            dirty_end = MIN(dirty_end, start + status_bytes);
+            chunk_end = MIN(chunk_end, start + status_bytes);
+        }
+
+        dirty_end = bdrv_dirty_bitmap_next_zero(s->copy_bitmap, start,
+                                                chunk_end - start);
+        if (dirty_end >= 0) {
+            chunk_end = MIN(chunk_end, dirty_end);
         }
 
         trace_block_copy_process(s, start);
 
         if (s->use_copy_range) {
-            ret = block_copy_with_offload(s, start, dirty_end);
+            ret = block_copy_with_offload(s, start, chunk_end);
             if (ret < 0) {
                 s->use_copy_range = false;
             }
         }
         if (!s->use_copy_range) {
-            ret = block_copy_with_bounce_buffer(s, start, dirty_end,
+            ret = block_copy_with_bounce_buffer(s, start, chunk_end,
                                                 error_is_read, &bounce_buffer);
         }
         if (ret < 0) {
-- 
2.21.0
