
Re: [Qemu-devel] [PATCHv3] block-migration: efficiently encode zero blocks

From: Peter Lieven
Subject: Re: [Qemu-devel] [PATCHv3] block-migration: efficiently encode zero blocks
Date: Tue, 16 Jul 2013 09:10:56 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130623 Thunderbird/17.0.7

On 15.07.2013 23:27, Eric Blake wrote:
> On 07/15/2013 04:55 AM, Peter Lieven wrote:
>> this patch adds an efficient encoding for zero blocks by
>> adding a new flag indicating a block is completely zero.
>>
>> additionally bdrv_write_zeroes() is used at the destination
>> to efficiently write these zeroes.

> patch revision history belongs outside of the commit message proper...

>>   - changed type of flags in blk_send() from int to uint64_t
>>   - added migration capability setting to enable sending
>>     of zero blocks.
>>
>> Signed-off-by: Peter Lieven <address@hidden>
> ...here, after the --- separator.

>>   block-migration.c             |   29 +++++++++++++++++++++++------
>>   include/migration/migration.h |    1 +
>>   migration.c                   |    9 +++++++++
>>   qapi-schema.json              |    7 ++++++-
>>   4 files changed, 39 insertions(+), 7 deletions(-)
>> +++ b/qapi-schema.json
>> @@ -613,10 +613,15 @@
>>  #          Disabled by default. Experimental: may (or may not) be renamed after
>>  #          further testing is complete. (since 1.6)
>> +# @zero-blocks: During storage migration encode blocks of zeroes efficiently. This
>> +#          essentially saves 1MB of zeroes per block on the wire. Enabling requires
>> +#          source and target VM to support this feature. Disabled by default.
>> +#          (since 1.6)
> Does this capability have to be explicitly set on the receiving end, or
> can it be automatic?  I'd prefer automatic - where only the sending end
> has to explicitly turn on the optimization.
Only on the sending end. But you have to check that the receiver supports it,
as you figured out. I can update the comments if you like.
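
For reference, the sending side turns the capability on over QMP before starting the migration; the capability name matches the qapi-schema addition in the patch:

```json
{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [
      { "capability": "zero-blocks", "state": true } ] } }
```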

> Are there any downsides to unconditionally using this when it is
> supported on both sides?  With xbzrle, there are workloads where
> enabling it is a net loss.

Downsides, not that I know of. The problem with xbzrle is that it is
very complex, and memory and network speed may be so high that
not using XBZRLE can be better than enabling it. Here the only
penalty is the zero-block check, which is lightning fast compared to
disk access, and the data is in memory anyway.
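
The check in question can be sketched as follows. This is a simplified stand-in for QEMU's optimized buffer_is_zero() (the real one scans with word-sized accesses), and the flag value here is made up for illustration; the actual constants live in block-migration.c:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative flag value only; the real BLK_MIG_FLAG_* constants
 * are defined in block-migration.c. */
#define BLK_MIG_FLAG_ZERO_BLOCK 0x04

/* Byte-wise zero scan; QEMU's buffer_is_zero() does the same job with
 * wider accesses, so it is far cheaper than the disk I/O that
 * produced the block. */
static int buf_is_zero(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (buf[i] != 0) {
            return 0;
        }
    }
    return 1;
}

/* Sender-side decision: flag an all-zero block instead of shipping
 * its payload, but only when the zero-blocks capability is enabled. */
static uint64_t blk_send_flags(const uint8_t *buf, size_t len,
                               int zero_blocks_enabled)
{
    uint64_t flags = 0;
    if (zero_blocks_enabled && buf_is_zero(buf, len)) {
        flags |= BLK_MIG_FLAG_ZERO_BLOCK;
    }
    return flags;
}
```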

The benefit is that you gain a lot:

a) you save network bandwidth (which might be low).

b) you can explicitly write zeroes at the receiving end. With the
write-zeroes optimizations that exist for various drivers this can be a
huge benefit in that it keeps the target thin-provisioned.
Otherwise a block migration would always mean the target is fully
allocated.
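
The receiving-end idea is: when the zero flag is set there is no payload on the wire, and the block is materialized with a write-zeroes call, which lets drivers keep the target sparse. A minimal model, with a plain buffer standing in for the target device (in QEMU proper the branch is bdrv_write_zeroes() vs. bdrv_write() on a BlockDriverState, and the flag value here is illustrative):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative flag value only; must match whatever the sender used. */
#define BLK_MIG_FLAG_ZERO_BLOCK 0x04

/* Destination-side sketch: dst stands in for the target block device.
 * On the zero path nothing was sent after the header, so payload may
 * be NULL; the real code's bdrv_write_zeroes() lets drivers keep a
 * thin-provisioned target sparse instead of allocating zeroes. */
static void blk_recv_block(uint8_t *dst, const uint8_t *payload,
                           size_t len, uint64_t flags)
{
    if (flags & BLK_MIG_FLAG_ZERO_BLOCK) {
        memset(dst, 0, len);
    } else {
        memcpy(dst, payload, len);
    }
}
```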

