qemu-devel

Re: [Qemu-devel] KVM call agenda for June 28


From: Dor Laor
Subject: Re: [Qemu-devel] KVM call agenda for June 28
Date: Tue, 05 Jul 2011 16:39:06 +0300
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.1.10-1.fc15 Lightning/1.0b3pre Thunderbird/3.1.10 ThunderBrowse/3.3.5

On 07/05/2011 03:58 PM, Marcelo Tosatti wrote:
On Tue, Jul 05, 2011 at 01:40:08PM +0100, Stefan Hajnoczi wrote:
On Tue, Jul 5, 2011 at 9:01 AM, Dor Laor <address@hidden> wrote:
I tried to re-arrange all of the requirements and use cases using this wiki
page: http://wiki.qemu.org/Features/LiveBlockMigration

It would be best to agree on the most interesting use cases (while making
sure we cover future ones).
The next step is to set the interface for all the various verbs since the
implementation seems to be converging.

Live block copy was supposed to support snapshot merge.  I think the
current favored approach is to make the source image a backing file to
the destination image and essentially do image streaming.
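(For illustration only, with hypothetical image names: the backing-file relationship described above could be set up roughly as below; this is only a sketch of the layout, not the actual streaming command, which was still under discussion at the time.)

  # Hypothetical filenames; sketch of the backing-file setup only.
  # Create an empty destination whose backing file is the current source:
  qemu-img create -f qcow2 -b source.qcow2 dest.qcow2
  # Streaming then copies the backing data into dest.qcow2; once complete,
  # the backing link could be dropped, e.g. with an unsafe rebase:
  qemu-img rebase -u -b "" dest.qcow2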

Using this mechanism for snapshot merge is tricky.  The COW file
already uses the read-only snapshot base image.  So now we cannot
trivially copy the COW file contents back into the snapshot base image
using live block copy.

It never did. Live copy creates a new image where both the snapshot and
"current" are copied to.

This is similar to image streaming.

I'm not sure I see what's wrong with doing an in-place merge:

Let's suppose we have this COW chain:

  base <-- s1 <-- s2

Now a live snapshot is created on top of s2; s2 becomes RO and s3 is RW:

  base <-- s1 <-- s2 <-- s3
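(Just to make the example concrete, with hypothetical filenames: the same chain could be built offline with qemu-img; the live snapshot step creates s3 the same way while the guest keeps running.)

  # Hypothetical filenames; offline sketch of the chain above.
  qemu-img create -f qcow2 -b base.qcow2 s1.qcow2
  qemu-img create -f qcow2 -b s1.qcow2 s2.qcow2
  # The live snapshot adds a new RW top on the now read-only s2:
  qemu-img create -f qcow2 -b s2.qcow2 s3.qcow2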

Now we're done with s2 (post backup) and would like to merge s3 into s2.

With your approach we do a live copy of s3 into newSnap:

  base <-- s1 <-- s2 <-- s3
  base <-- s1 <-- newSnap

When it is over, s2 and s3 can be erased.
The downside is the I/O spent copying s2's data, plus the temporary storage. I guess temp storage is cheap, but the extra I/O is expensive.

My approach was to collapse s3 into s2 and erase s3 eventually:

before: base <-- s1 <-- s2 <-- s3
after:  base <-- s1 <-- s2
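(Offline, this collapse is roughly what qemu-img commit does, copying the top image's allocated clusters into its backing file; the question here is doing the equivalent safely while the guest is running. Filenames below are hypothetical.)

  # Offline analogue of the in-place merge (hypothetical filenames):
  qemu-img commit s3.qcow2   # copy s3's allocated clusters into its backing file s2
  rm s3.qcow2                # s3 can be erased once the commit has succeeded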

If we do the live block copy using the mirror driver, it should be safe as long as we keep the ordering of new writes into s3 during the operation. Even a failure in the middle won't cause harm, since management will keep using s3 until it gets a success event.


It seems like snapshot merge will require dedicated code that reads
the allocated clusters from the COW file and writes them back into the
base image.

A very inefficient alternative would be to create a third image, the
"merge" image file, which has the COW file as its backing file:
snapshot (base) ->  cow ->  merge

All data from snapshot and cow is copied into merge and then snapshot
and cow can be deleted.  But this approach results in full data
copying and uses potentially 3x space if cow is close to the size of
snapshot.
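(A rough offline sketch with hypothetical filenames, just to make the cost concrete; the live version would stream the data instead of using qemu-img, but the copy and space overhead are the same.)

  # Hypothetical filenames. The end state is one standalone image holding
  # everything from snapshot + cow, i.e. a full flatten of the chain:
  qemu-img convert -O qcow2 cow.qcow2 merge.qcow2
  # Only after this full copy can snapshot and cow be deleted, so up to
  # ~3x the data may exist on disk in the meantime.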

Management can set an upper limit on the amount of data that is merged,
and create a new base once it is exceeded. This avoids copying excessive
amounts of data.

Any other ideas that reuse live block copy for snapshot merge?

Stefan





