
Re: [Qemu-devel] Qemu and Changed Block Tracking


From: John Snow
Subject: Re: [Qemu-devel] Qemu and Changed Block Tracking
Date: Mon, 27 Feb 2017 15:39:50 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.7.0


On 02/24/2017 04:44 PM, Eric Blake wrote:
> On 02/24/2017 03:31 PM, John Snow wrote:
>>>
>>> But the Backup Server could instead connect to the NAS directly,
>>> avoiding load on the frontend LAN and the QEMU node.
>>>
>>
>> In a live backup I don't see how you can remove QEMU from the data
>> transfer loop. QEMU is the only process that knows what the correct
>> view of the image is, and it needs to facilitate the transfer.
>>
>> It's not safe to copy the blocks directly without QEMU's mediation.
> 
> Although we may already have enough tools in place to achieve that:
> create a temporary qcow2 wrapper around the primary image via external
> snapshot, so that the primary image is now read-only in qemu; then use
> whatever block-status mechanism is available (the NBD block status
> extension, or directly reading from a persistent bitmap) to drive a more
> efficient offline transfer of just the relevant portions of that main
> file; then live block-commit to get qemu to start writing to the file
> again.
> 

Right, really good point. We can just turn the "live" backup into a
not-live one (kind of!) to work around the constraint.

In this case, creating the external snapshot should probably create a
"new" bitmap on the root, leaving the old one behind on the backing
file. This avoids spurious copies of data that hasn't changed in the
backing file, and makes clearing the bitmap on success easier for us.
Once the snapshots are re-merged, we can merge their respective bitmaps
again.
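
Concretely, I'm picturing something like this at the QMP level (drive,
bitmap and overlay names here are invented for illustration, and ideally
the two steps would be bundled atomically, e.g. in a 'transaction', so
that no write can slip in between them):

  { "execute": "blockdev-snapshot-sync",
    "arguments": { "device": "drive0",
                   "snapshot-file": "/vmstore/vm1-backup-overlay.qcow2",
                   "format": "qcow2" } }

  { "execute": "block-dirty-bitmap-add",
    "arguments": { "node": "drive0", "name": "dirty-since-snapshot" } }

The old bitmap then describes exactly what the external tool has to copy
out of the read-only backing file, while the new one records writes
landing in the overlay; folding the new bitmap back into the old one
after the commit would presumably need a bitmap-merge primitive we don't
have yet.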

This can work in some scenarios, sure! We may have to be careful about
how exactly bitmaps fork when you create new external snapshots, but
that does seem workable and (possibly) the most performant option, if
that's a concern.

--js

> In other words, any time your algorithm needs to freeze I/O to a
> particular file, the solution is to add a qcow2 external snapshot
> followed by a live commit.
> 
> So tweaking the proposal a few mails ago:
> 
> fsfreeze (optional)
> create qcow2 snapshot wrapper as a write lock (via QMP)
> fsthaw - now with no risk of violating guest timing constraints
> dirtymap = find all blocks that are dirty since last backup
>            (via named bitmap / NBD block status)
> foreach block in dirtymap {
>     copy to backup via external software
> }
> live commit image (via QMP)
> 
> The window where guest I/O is frozen is small (the freeze/snapshot
> create/thaw steps can be done in less than a second), while the window
> where you are extracting incremental backup data is longer (during that
> time, guest I/O is happening into a wrapper qcow2 file).
> 
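
For concreteness, that sequence might map onto QMP / guest-agent commands
roughly like this (device name and paths are made up; the freeze/thaw
pair goes over the qemu-guest-agent channel, not the QMP monitor):

  (guest agent)  { "execute": "guest-fsfreeze-freeze" }

  (QMP)  { "execute": "blockdev-snapshot-sync",
           "arguments": { "device": "drive0",
                          "snapshot-file": "/vmstore/vm1-backup-overlay.qcow2",
                          "format": "qcow2" } }

  (guest agent)  { "execute": "guest-fsfreeze-thaw" }

  ... external software reads the now read-only backing image directly
  from shared storage, copying only the extents the dirty map flags ...

  (QMP)  { "execute": "block-commit",
           "arguments": { "device": "drive0" } }

  ... wait for BLOCK_JOB_READY, then pivot back to the original image ...

  (QMP)  { "execute": "block-job-complete",
           "arguments": { "device": "drive0" } }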


