From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-devel] [PATCH v8 0/2] qemu-img info lists bitmap directory entries
Date: Wed, 30 Jan 2019 13:28:46 +0000

30.01.2019 15:43, Eric Blake wrote:
> On 1/30/19 2:00 AM, Vladimir Sementsov-Ogievskiy wrote:
> 
>>> So, I'm trying to test this, and I've discovered something rather
>>> annoying about persistent snapshots: they DON'T get written to disk
>>> until the qemu process exits.  In other words, even after creating a
>>> persistent bitmap via QMP (I'm trying to debug my libvirt API for
>>> incremental snapshots, so I did this via 'virsh snapshot-create-as $dom
>>> name', but it boils down to a QMP transaction with
>>> 'block-dirty-bitmap-add' as one of the commands), running:
>>>
>>> $ qemu-img info -U Active1.qcow2
>>>
>>> shows
>>>       bitmaps:
>>>       refcount bits: 16
>>>
>>> for as long as the qemu process is running.
> 
>>
>> But what is the benefit of it, besides qemu-img info with the --force-share
>> option, which of course is not guaranteed to show valid metadata?
>>
>> While qemu is running, the valid way to obtain info is QMP. Does libvirt call
>> qemu-img --force-share?
> 
> You're right that libvirt will be using QMP and not qemu-img. But that
> does not prevent other clients from using qemu-img on a file

I just think that using --force-share in production is the wrong thing to do, so
it is needed only for debugging. And then flushing bitmaps may be needed
only for debugging as well. So why flush in production?
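
To be concrete, the QMP path while qemu is running is just (a sketch; the
device/bitmap naming in the reply depends on the configuration):

```json
{ "execute": "query-block" }
```

The "dirty-bitmaps" array in the reply gives each bitmap's name, granularity
and status, and is authoritative while the process is up, unlike
qemu-img info -U.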

>, regardless
> of whether the file is in use by a guest. There's also the argument that
> if qemu dies suddenly (most likely due to a bug or to a host being
> fenced), knowing what bitmaps were present in the image

Actually, they weren't: they have never been in the image, only in RAM, like
non-persistent bitmaps.

> is better than
> having nothing at all (but conversely, if qemu dies suddenly, you are
> likely missing the bitmap data for the most recent changes, so even if
> disabled bitmaps are flushed and no longer marked in-use, they still
> won't be sufficient to allow an incremental backup, and a full backup
> will be necessary regardless of how much or little bitmap information
> got persisted).

Exactly. But yes, flushing disabled bitmaps may theoretically make sense, as
they are at least valid. Though for disabled bitmaps, I think it would be more
useful to teach QEMU to load/unload them on demand, so as not to use extra RAM
when they are not needed (which is most of the time).

> 
> While you're right that --force-share is not required to show up-to-date
> metadata, it also is not required to show nothing at all.  And at least
> knowing that a persistent bitmap is associated with a qcow2 file may
> make other things obvious - such as the fact that the image can't be
> resized (until we implement the functionality to support resize and
> bitmaps in the same image).

Hm, yes, if we have invalid IN_USE bitmaps in the image, we can't resize it.
But is that a good reason to store these bitmaps? If we don't store them, the
image could be resized, why not (provided, of course, that it is not otherwise
corrupted, since the image was not closed properly).


It all depends on the meaning of a 'persistent' bitmap. For me, persistent
means 'the bitmap will be saved to the image on close'. They are implemented
this way, and documented this way too:
# @persistent: the bitmap is persistent, i.e. it will be saved to the
#              corresponding block device image file on its close.
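
For reference, that is the flag given at creation time; the QMP command looks
like this (node and bitmap names here are made up):

```json
{ "execute": "block-dirty-bitmap-add",
  "arguments": { "node": "drive0", "name": "bitmap0", "persistent": true } }
```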

But you see them as "bitmaps which exist in the image".

I don't see a real reason to flush them while QEMU is running. --force-share
is a back-door; it should not be used, and I doubt that implementing something
especially for --force-share is a good idea.
Then the only difference is that we'll see IN_USE bitmaps in the
image if qemu stops unexpectedly, and the only meaningful thing to
do with such invalid bitmaps is to remove them.
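
And the removal itself is just the ordinary QMP command (names made up; for a
persistent bitmap this also deletes it from the image file):

```json
{ "execute": "block-dirty-bitmap-remove",
  "arguments": { "node": "drive0", "name": "bitmap0" } }
```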


-- 
Best regards,
Vladimir
