
Re: [Qemu-block] [PATCH 4/9] mirror: efficiently zero out target


From: Denis V. Lunev
Subject: Re: [Qemu-block] [PATCH 4/9] mirror: efficiently zero out target
Date: Wed, 15 Jun 2016 11:46:49 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1

On 06/15/2016 06:00 AM, Eric Blake wrote:
On 06/14/2016 09:25 AM, Denis V. Lunev wrote:
With a bdrv_co_write_zeroes method on a target BDS, zeroes will not be put
on the wire. Thus the target can be zeroed out very efficiently. This
should be done with the largest chunk possible.

This improves the performance of live migration of an empty disk by a
factor of 150 if NBD supports write_zeroes.

Signed-off-by: Denis V. Lunev <address@hidden>
Reviewed-by: Vladimir Sementsov-Ogievskiy <address@hidden>
CC: Stefan Hajnoczi <address@hidden>
CC: Fam Zheng <address@hidden>
CC: Kevin Wolf <address@hidden>
CC: Max Reitz <address@hidden>
CC: Jeff Cody <address@hidden>
CC: Eric Blake <address@hidden>
---
  block/mirror.c | 32 +++++++++++++++++++++++++++++++-
  1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/block/mirror.c b/block/mirror.c
index c7b3639..c2f8773 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -21,6 +21,7 @@
  #include "qemu/ratelimit.h"
  #include "qemu/bitmap.h"
+#define MIRROR_ZERO_CHUNK (3u << (29 - BDRV_SECTOR_BITS)) /* 1.5 Gb */
Probably nicer to track this in bytes.  And do you really want a
hard-coded arbitrary limit, or is it better to live with
MIN_NON_ZERO(target_bs->bl.max_pwrite_zeroes, INT_MAX)?
unfortunately we should. INT_MAX is not aligned as required.
Maybe we should align INT_MAX properly to fulfill the
write_zeroes alignment.

Hmm, maybe we can align INT_MAX down properly. OK,
I'll try to do that gracefully - roughly along the lines of the
sketch below.
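For reference, something like this (sketch only - the byte-based limit
names follow the MIN_NON_ZERO suggestion above and may not match this
tree exactly):

    /* Clamp the zeroing chunk to the device limit, then round it down
     * so that every request satisfies the write_zeroes alignment. */
    int64_t max_bytes = MIN_NON_ZERO(target_bs->bl.max_pwrite_zeroes,
                                     INT_MAX);
    int64_t align = MAX(target_bs->bl.pwrite_zeroes_alignment,
                        BDRV_SECTOR_SIZE);

    max_bytes = QEMU_ALIGN_DOWN(max_bytes, align);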

@@ -512,7 +513,8 @@ static int mirror_dirty_init(MirrorBlockJob *s)
      end = s->bdev_length / BDRV_SECTOR_SIZE;
-    if (base == NULL && !bdrv_has_zero_init(target_bs)) {
+    if (base == NULL && !bdrv_has_zero_init(target_bs) &&
+            target_bs->drv->bdrv_co_write_zeroes == NULL) {
Indentation is off, although if checkpatch.pl doesn't complain I guess
it doesn't matter that much.

Why should you care whether the target_bs->drv implements a callback?
Can't you just rely on the normal bdrv_*() functions to do the dirty
work of picking the most efficient implementation without you having to
bypass the block layer?  In fact, isn't that the whole goal of
bdrv_make_zero() - why not call that instead of reimplementing it?
this is the idea of the patch, actually. If the callback is not
implemented, we will have zeroes actually written or sent over the wire.
In that case there is not much sense in doing this: the amount of data
actually written would increase significantly (some areas would be
written twice - first with zeroes and then with the actual data).

On the other hand, if the callback is implemented, we will have a very
small amount of data on the wire and actually written, and thus we get a
benefit. I am trying to avoid very small chunks of data. Here (during the
migration process) the data is sent in 10 MB chunks, and that takes a LOT
of time with NBD. We can currently send 1.5 GB chunks. They occupy the
same 26 bytes on the transport layer.
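To put rough numbers on that (a back-of-envelope illustration added here,
assuming a hypothetical 100 GiB empty disk and taking the 26-byte
per-request figure at face value):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t disk_bytes  = 100ull << 30; /* 100 GiB of zeroes */
        const uint64_t chunk_bytes = 3ull << 29;   /* 1.5 GiB per request */
        const uint64_t hdr_bytes   = 26;           /* one request on the wire */

        uint64_t reqs = (disk_bytes + chunk_bytes - 1) / chunk_bytes;
        printf("%" PRIu64 " requests, %" PRIu64 " header bytes total, "
               "instead of %" PRIu64 " GiB of literal zeroes\n",
               reqs, reqs * hdr_bytes, disk_bytes >> 30);
        return 0;
    }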

Patch needs rebasing - we've redone this into bdrv_co_pwrite_zeroes and
a byte interface, since upstream commit c1499a5e.
sure!

          bdrv_set_dirty_bitmap(s->dirty_bitmap, 0, end);
          return 0;
      }
@@ -546,6 +548,34 @@ static int mirror_dirty_init(MirrorBlockJob *s)
          }
          sector_num += n;
      }
+
+    if (base != NULL || bdrv_has_zero_init(target_bs)) {
You're now repeating the conditional that used to be 'bool
mark_all_dirty' (well, this is !mark_all_dirty); is it worth keeping the
simpler bool around?
not quite. The difference is in the presence of the callback,
but sure, I can cache it, no problem - something like the sketch below:
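(Illustrative only - 'must_zero' is a name I am making up here, not from
the patch:)

    bool must_zero = base == NULL && !bdrv_has_zero_init(target_bs);

    if (must_zero && target_bs->drv->bdrv_co_write_zeroes == NULL) {
        /* zeroes would be sent in full over the wire anyway */
        bdrv_set_dirty_bitmap(s->dirty_bitmap, 0, end);
        return 0;
    }

    /* ... incremental scan of the top image stays unchanged ... */

    if (!must_zero) {
        /* no need to zero out entire disk */
        return 0;
    }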

+        /* no need to zero out entire disk */
+        return 0;
+    }
+
+    for (sector_num = 0; sector_num < end; ) {
+        int nb_sectors = MIN(MIRROR_ZERO_CHUNK, end - sector_num);
Why limit yourself to 1.5G? It's either too small for what you can
really do, or too large for what the device permits.  See my above
comment about MIN_NON_ZERO.
alignment, covered above

+        int64_t now = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+
+        if (now - last_pause_ns > SLICE_TIME) {
+            last_pause_ns = now;
+            block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, 0);
+        }
+
+        if (block_job_is_cancelled(&s->common)) {
+            return -EINTR;
+        }
+
+        if (s->in_flight == MAX_IN_FLIGHT) {
+            trace_mirror_yield(s, s->in_flight, s->buf_free_count, -1);
+            mirror_wait_for_io(s);
+            continue;
+        }
Hmm - I guess your mirror yield points are why you couldn't just
directly use bdrv_make_zero(); but is that something where some code
refactoring can share more code rather than duplicating it?
the purpose is to put several requests on the wire in parallel.
The original mirror code does this nicely and thus is reused.

+
+        mirror_do_zero_or_discard(s, sector_num, nb_sectors, false);
+        sector_num += nb_sectors;
+    }
      return 0;
  }



