From: Denis V. Lunev
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH] mirror: add sync mode incremental to drive-mirror and blockdev-mirror
Date: Thu, 11 May 2017 16:28:14 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0

On 05/11/2017 04:16 PM, Daniel Kučera wrote:
>
> 2017-05-10 17:05 GMT+02:00 Denis V. Lunev <address@hidden>:
>
>     On 05/10/2017 05:00 PM, Stefan Hajnoczi wrote:
>     > On Wed, May 10, 2017 at 03:25:31PM +0200, Denis V. Lunev wrote:
>     >> On 05/09/2017 06:52 PM, Stefan Hajnoczi wrote:
>     >>> On Mon, May 08, 2017 at 05:07:18PM -0400, John Snow wrote:
>     >>>> On 05/08/2017 05:02 PM, Denis V. Lunev wrote:
>     >>>>> On 05/08/2017 10:35 PM, Stefan Hajnoczi wrote:
>     >>>>>> On Thu, May 04, 2017 at 12:54:40PM +0200, Daniel Kucera wrote:
>     >>>>>>
>     >>>>>> Seems like a logical extension along the same lines as the
>     >>>>>> backup block job's dirty bitmap sync mode.
>     >>>>>>
>     >>>>>>> The bitmap parameter chooses an existing dirty bitmap
>     >>>>>>> instead of the one newly created in mirror_start_job.
>     >>>>>>>
>     >>>>>>> Signed-off-by: Daniel Kucera <address@hidden>
>     >>>>> Can you please describe the use case in a bit more detail.
>     >>>>>
>     >>>>> For now this could be a bit strange:
>     >>>>> - a dirty bitmap, found via bdrv_create_dirty_bitmap,
>     >>>>>   can be read-write, i.e. modified by guest writes, or
>     >>>>>   read-only, in which case it must not be modified. Thus
>     >>>>>   adding an r/o bitmap to the mirror could result in
>     >>>>>   interesting things.
>     >>>>>
>     >>>> This patch as submitted does not put the bitmap into a
>     >>>> read-only mode; it leaves it RW and modifies it as it
>     >>>> processes the mirror command.
>     >>>>
>     >>>> Though you do raise a good point: this bitmap is now in use
>     >>>> by a job and should not be allowed to be deleted by the user,
>     >>>> but our existing mechanism treats a locked bitmap as one that
>     >>>> is also in R/O mode. This would be a different use case.
>     >>>>
>     >>>>> At a minimum, we should prohibit using r/o bitmaps this way.
>     >>>>>
>     >>>>> So why use mirror, rather than backup, for this case?
>     >>>>>
>     >>>> My guess is for pivot semantics.
>     >>> Daniel posted his workflow in a previous revision of this series:
>     >>>
>     >>> He is doing a variation on non-shared storage migration with
>     >>> the mirror block job, but using the ZFS send operation to
>     >>> transfer the initial copy of the disk.
>     >>>
>     >>> Once the ZFS send completes, it is necessary to transfer all
>     >>> the blocks that were dirtied while the transfer was taking
>     >>> place.
>     >>>
>     >>> 1. Create dirty bitmap and start tracking dirty blocks in QEMU.
>     >>> 2. Snapshot and send ZFS volume.
>     >>> 3. mirror sync=bitmap after ZFS send completes.
>     >>> 4. Live migrate.
>     >>>
>     >>> Stefan
>     >> thank you very much. This is clear now.
>     >>
>     >> If I am not mistaken, this can be done very easily with
>     >> the current implementation, without further QEMU modifications.
>     >> Daniel just needs to start the mirror and pause it for the
>     >> duration of stage (2).
>     >>
>     >> Will this work?
>     > I think it's an interesting idea, but I'm not sure if sync=none
>     > + pause can be done atomically. Without atomicity, a block might
>     > be sent to the destination while the ZFS send is still in
>     > progress.
>     >
>     > Stefan
>     Atomicity here is completely impossible.
>
>     The case is like this.
>
>     1) start the mirror
>     2) pause the mirror
>     3) snapshot + ZFS send
>     4) resume mirror
>     5) live migrate
>
>     The worst-case problem: some additional blocks would be
>     sent twice. This should not be a big deal; it is actually
>     what backup always does. The number of such blocks will
>     not be very large.
>
>     Den
>
>
> I guess it won't be possible to start the mirror in step 1), or it
> will instantly fail, because the block device on the destination
> doesn't exist at that moment, so it's not even possible to start the
> NBD server.
>
> Or am I wrong?
>
Good point, but I guess you can create an empty volume of the
proper size at step 0, set up the QEMU mirror, and start copying
the data to that volume. I may be completely wrong here, as
I do not know ZFS management procedures and tools.

Can you share the commands you are using to perform the
operation? Maybe we will be able to find a suitable solution.

Den
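
To make the "worst case" above concrete, here is a toy model of the flow
(plain Python; all names are hypothetical, none of this is QEMU code).
Writes that land before or during the send are marked in a dirty bitmap and
simply re-copied on resume, so some blocks travel twice but none are missed:

```python
# Toy model of: start mirror -> pause -> snapshot + send -> resume.
# All names are hypothetical; this only illustrates the bookkeeping.

class Disk:
    def __init__(self, nblocks):
        self.blocks = [0] * nblocks

src = Disk(8)
dst = Disk(8)
dirty = set()            # the dirty bitmap: blocks written since step 1

def guest_write(block, value):
    src.blocks[block] = value
    dirty.add(block)     # mirror is paused, so writes only mark the bitmap

# steps 1-2: mirror started and immediately paused; guest keeps writing
guest_write(1, 11)
guest_write(3, 33)

# step 3: snapshot + ZFS send (bulk copy of the snapshot contents)
snapshot = list(src.blocks)
dst.blocks = list(snapshot)

# writes during the send are still tracked by the bitmap
guest_write(3, 34)
guest_write(5, 55)

# step 4: resume the mirror; re-copy everything the bitmap marks dirty.
# Block 1 is re-sent with unchanged contents -- the "worst case" cost.
for b in sorted(dirty):
    dst.blocks[b] = src.blocks[b]
dirty.clear()

assert dst.blocks == src.blocks   # destination converged; nothing missed
```

Block 1 travels twice with identical contents; that redundancy is the only
cost of the non-atomic pause, while block 3's re-send is needed anyway to
carry the newer write.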



