Re: [Qemu-block] [PATCH RESEND v4] drive-mirror: add incremental mode

From: Max Reitz
Subject: Re: [Qemu-block] [PATCH RESEND v4] drive-mirror: add incremental mode
Date: Wed, 27 Feb 2019 16:25:51 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.5.1

CC-ing John because of the keyword "incremental".

On 14.02.19 07:43, mahaocong wrote:
> From: mahaocong <address@hidden>
>
> This patch adds the possibility to start mirroring with a user-created
> bitmap. In full mode, mirror creates an anonymous bitmap by scanning the
> whole block chain, and in top mode it creates a bitmap by scanning only the
> top block layer. So I think I can copy a user-created bitmap and use it as
> the initial state of the mirror, the same as incremental mode drive-backup;
> I call this new mode incremental mode drive-mirror.
>
> A possible usage scenario for incremental mode mirror is live migration. To
> preserve the block data and recover after a malfunction, someone may back up
> the data to ceph or other distributed storage. For incremental backup in
> qemu, we need to create a new bitmap and attach it to the block device
> before the first backup job; the bitmap then records the changes made after
> that backup job. If we want to migrate this VM, we can migrate the block
> data between source and destination by using drive-mirror directly, or use
> the backup data and backup bitmap to reduce the amount of data that must be
> synchronized. To do the latter, we first create a new image on the
> destination with the backup image as its backing file, then set the backup
> bitmap as the initial state of the incremental mode drive-mirror, and
> synchronize the dirty blocks starting from the state left by the last
> incremental backup job. When the mirror completes, we have an active layer
> on the destination whose backing file is the backup image on ceph. Then we
> can live-copy data from the backing file into the overlay image by using
> block-stream, or continue doing backups.
>
> In this scenario, if the guest OS has not written much data since the last
> backup, incremental mode may transmit less data than full mode or top mode.
> However, if a lot of data has been written, incremental mode has no
> advantage over the other modes.
>
> This scenario can be described by the following steps:
> 1. Create an rbd image in ceph, and map an nbd device to the rbd image.
> 2. Create a new bitmap and attach it to the block device.
> 3. Do a full mode backup of the nbd device, thus syncing it to the rbd
>    image.
> 4. Do several incremental mode backups.
> 5. Create a new image on the destination with the backup image as its
>    backing file.
> 6. Do the live migration, migrating the block data with incremental mode
>    drive-mirror using the bitmap created in step 2.
>
> Signed-off-by: Ma Haocong <address@hidden>
> ---
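The savings claim in the quoted description can be illustrated with a toy model (plain Python, no QEMU APIs; the block and bitmap representations here are hypothetical simplifications): full mode must copy every block of the device, while incremental mode copies only the blocks marked dirty since the last backup.

```python
# Toy model of how much data each mirror sync mode must transfer.
# A dirty bitmap is modeled as a set of block indices touched since the
# last backup. (Illustrative simplification, not the QEMU data structures.)

def blocks_to_copy(num_blocks, dirty_bitmap, mode):
    """Return the set of block indices a mirror job must transfer."""
    if mode == "full":
        # Full mode scans the whole chain: every block is copied.
        return set(range(num_blocks))
    if mode == "incremental":
        # Incremental mode copies only what the bitmap marks dirty.
        return set(dirty_bitmap)
    raise ValueError(f"unknown mode: {mode}")

# Guest wrote to 3 of 1000 blocks since the last backup:
dirty = {5, 42, 900}
print(len(blocks_to_copy(1000, dirty, "full")))         # 1000
print(len(blocks_to_copy(1000, dirty, "incremental")))  # 3
```

As the description notes, the advantage disappears once the dirty set approaches the full device size.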

So one important point about incremental backups is that you can
actually do them incrementally: The bitmap is effectively cleared at the
beginning of the backup process (a successor bitmap is installed that is
cleared and receives all changes; at the end of the backup, it either
replaces the old bitmap (on success) or is merged into it (on failure)).
Therefore, you can do the next incremental backup by using the same bitmap.
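The successor mechanism described above can be sketched abstractly (plain Python sets standing in for QEMU's bitmaps; names like `install_successor` follow the description, not the real API):

```python
# Abstract sketch of dirty-bitmap successor semantics during a backup job:
# cleared successor at job start, replace on success, merge on failure.
# (Illustrative model only; the real QEMU HBitmap API differs.)

class DirtyBitmap:
    def __init__(self, dirty=()):
        self.dirty = set(dirty)
        self.successor = None

    def install_successor(self):
        # At backup start: a cleared successor receives all new writes,
        # while the frozen bitmap drives the backup itself.
        self.successor = set()

    def record_write(self, block):
        target = self.successor if self.successor is not None else self.dirty
        target.add(block)

    def finish(self, success):
        if success:
            # Success: the successor replaces the old bitmap, so the next
            # incremental backup covers only writes made during this job.
            self.dirty = self.successor
        else:
            # Failure: the successor is merged back; no dirty state is lost.
            self.dirty |= self.successor
        self.successor = None

bm = DirtyBitmap(dirty={1, 2})
bm.install_successor()
bm.record_write(7)       # guest write during the backup
bm.finish(success=True)
print(sorted(bm.dirty))  # [7]
```

On failure the same sequence would leave the bitmap as {1, 2, 7}, ready for a retry.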

How would this work with mirroring?  Instead of clearing the bitmap at
the start of the process, it'd need to be cleared at the end (because we
reach synchronization between source and target then).  But how would
error handling work?

I suppose the named bitmap would need to be copied to act as the dirty
bitmap for the mirror job (at the start of the job).  If a failure
occurs, the copy is simply discarded.  On success, the named bitmap is
cleared when the job is completed.  Hm, that seems to make sense.  Did I
forget anything, John?
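That proposal can be sketched as follows (a hypothetical model with plain sets, not QEMU code): the job works from a private copy of the named bitmap, so a failure costs nothing, and only a completed job clears the user's bitmap.

```python
# Sketch of the proposed incremental-mirror bitmap handling: copy the
# named bitmap at job start, mirror from the copy, discard the copy on
# failure, and clear the named bitmap only on successful completion.
# (Illustrative model; names and structures are not the QEMU API.)

def run_incremental_mirror(named_bitmap, copy_block, job_succeeds):
    """named_bitmap: set of dirty block indices (mutated only on success).
    copy_block: callback that copies one block to the target."""
    job_bitmap = set(named_bitmap)   # private copy drives the job
    for block in sorted(job_bitmap):
        copy_block(block)
    if not job_succeeds:
        # Failure: the copy is simply discarded; the named bitmap is
        # untouched, so a later job can retry from the same state.
        return False
    # Success: source and target are now synchronized, so the named
    # bitmap is cleared, ready for the next incremental job.
    named_bitmap.clear()
    return True

named = {3, 8}
copied = []
run_incremental_mirror(named, copied.append, job_succeeds=True)
print(copied, named)   # [3, 8] set()
```

The key difference from backup is that the clearing happens at job completion rather than at job start, since that is when source and target reach synchronization.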

In any case, I don't think this patch implements anything in this
regard...?  So it doesn't really implement incremental mirroring.
However, I think it should, if possible.

