On 26.10.2016 15:13, Vladimir Sementsov-Ogievskiy wrote:
On 26.10.2016 12:21, Vladimir Sementsov-Ogievskiy wrote:
On 26.10.2016 12:04, Vladimir Sementsov-Ogievskiy wrote:
On 25.10.2016 13:53, Vladimir Sementsov-Ogievskiy wrote:
On 24.10.2016 20:18, Max Reitz wrote:
On 24.10.2016 19:08, Max Reitz wrote:
On 24.10.2016 13:35, Vladimir Sementsov-Ogievskiy wrote:
By the way, dealing with removal of bitmaps when they are deleted by the user...
On 24.10.2016 13:32, Vladimir Sementsov-Ogievskiy wrote:
It's not a real problem, but it does take time, and I maintain that it's avoidable.
On 21.10.2016 22:58, Max Reitz wrote:
On 21.10.2016 17:34, Vladimir Sementsov-Ogievskiy wrote:
As bitmaps are sparse in the file, I need to allocate new space when saving them.
On 07.10.2016 22:44, Max Reitz wrote:
You don't need to free any bitmaps when opening the file, and you don't need to allocate any new bitmap space when closing it.
On 30.09.2016 12:53, Vladimir Sementsov-Ogievskiy wrote:
And what is the use of it?
This flag means that the bitmap is now in use by the software or was not successfully saved. In any way, with this flag set the bitmap data must be considered inconsistent and should not be loaded.

Don't you want to make use of this flag? It would appear useful to...
With the current implementation this flag is never set, as we just remove bitmaps from the image after loading. But as it is defined in the qcow2 spec, it must be handled. Also, it can be used in the future, if async schemes of bitmap loading/saving are implemented.
We also remove in-use bitmaps from the image when opening it.
Signed-off-by: Vladimir Sementsov-Ogievskiy
block/qcow2-bitmap.c | 17
1 file changed, 16 insertions(+), 1 deletion(-)
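To make the in_use semantics discussed in this patch concrete, here is a minimal sketch (Python for illustration only; the real code is C in block/qcow2-bitmap.c). The bit positions follow my reading of the qcow2 spec's bitmap directory entry flags, and the constant and function names are assumptions, not the actual QEMU API:

```python
# Sketch of qcow2 bitmap directory entry flags (assumed bit layout:
# bit 0 = in_use, bit 1 = auto, per the qcow2 bitmaps extension spec).
BME_FLAG_IN_USE = 1 << 0  # bitmap data may be inconsistent; do not load it
BME_FLAG_AUTO   = 1 << 1  # autoload: restore as a dirty bitmap on open

def mark_in_use(flags: int) -> int:
    """Set in_use instead of deleting the bitmap from the image."""
    return flags | BME_FLAG_IN_USE

def is_consistent(flags: int) -> bool:
    """A bitmap whose in_use flag is set must not be loaded."""
    return not (flags & BME_FLAG_IN_USE)
```

The point of the discussion below is that setting this one bit on open is much cheaper than freeing and later reallocating the bitmap's clusters.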
You could just mark autoload bitmaps as in_use instead of deleting them from the image when it's opened, and then overwrite them when the image is closed. Then there is no need to allocate any new bitmap space when closing, or to free it...
Is it a real problem to reallocate space in qcow2? If so, to minimize allocations, I will have to make a list of the clusters of in_use bitmaps on close, and then save the current bitmaps to these clusters (allocating some additional clusters, or freeing extra clusters).
That's time it doesn't need to take, because you can just use the in_use flag.
I wouldn't worry about reusing clusters of other
bitmaps. Of course we
can do that later on in some optimization but not now.
I just mean the basic case of some auto-loaded bitmap
that is not being
deleted while the VM is running and is just saved back
to the image file
once it is closed. I don't expect that users will always
consume all of
the auto-loaded bitmaps while the VM is active...
Also, if I don't free them on open, I'll have to free them on removal of the bdrv dirty bitmap...

Yes, so? That takes time, but that is something the user would expect. I wouldn't expect opening of the file to take long.
Overall, it doesn't matter time-wise whether you free the bitmap data when opening the image file, when the dirty bitmap is deleted by the user, or when the image file is closed. Just setting the single in_use flag for all of the auto-loaded bitmaps will basically not take any time at all.
On the other hand, as soon as just a single auto-loaded bitmap survives a VM (or qemu-img) invocation, you will very, very likely save at least some time, because writing the bitmap to the disk can reuse at least some of the existing clusters.
Deleting the bitmaps only when asked by the user is also positive when it comes to migration or read-only disks that are later reopened R/W: currently, as far as I can see, you just keep the auto-load bitmaps in the image if the image is part of a migration or opened read-only, but then you continue under the assumption that they have been removed from the image. That's not very good.
You are right, I need to handle reopening more carefully. In my way, I need to delete bitmaps when reopening R/W; in yours, set the in_use bit.
If you delete the bitmaps only when the user asks you to, then you can just return an error that the bitmap cannot be removed at the moment because the image file is read-only or used in migration (and I don't think the latter can even happen when it's the user who requests removal of the bitmap).
OK, I'll rewrite it this way, using the 'in_use' bit and trying to avoid allocate/free overhead.
Trying to reuse clusters of in_use bitmaps (including contiguous allocations for bitmap tables) will complicate things a lot: we could use the extra clusters of one in_use bitmap to save another one, and the same goes for bitmap tables. The extra clusters of an old bitmap table (in case of a resize) could be used for saving other bitmap data, etc.
A compromise reuse strategy:
1. Accumulate all data clusters of in_use bitmaps that were loaded by this qcow2 instance (we will not touch other, older in_use bitmaps) into a free_clusters list.
2. For each persistent BdrvDirtyBitmap: if there is a corresponding in_use bitmap in the image and its table size == the needed table size (there were no resizes), reuse its bitmap table; otherwise, move the old bitmap table clusters to free_clusters and allocate a new table.
3. For each persistent BdrvDirtyBitmap: take bitmap data clusters from the free_clusters list, and if it is empty, allocate new clusters.
4. Free any extra clusters remaining in the free_clusters list.
This strategy is not optimal, but not bad, I think. Is it OK for you? (I'm not sure that all this is not premature optimization; maybe the right way is to just free all the old stuff and reallocate it, as I do now.)
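The four steps above could be sketched roughly like this (Python for brevity; save_bitmaps and the alloc/free callbacks are hypothetical names, not the actual qcow2 driver API, and real cluster bookkeeping is of course more involved):

```python
def save_bitmaps(persistent_bitmaps, in_use_on_disk, alloc, free):
    """Sketch of the compromise reuse strategy.
    persistent_bitmaps: dicts with 'name', 'table_size', 'data_size'
                        (sizes in clusters).
    in_use_on_disk: name -> {'table': [clusters], 'data': [clusters]}
                    for our in_use bitmaps found in the image.
    alloc(n): allocate n new clusters; free(clusters): release them.
    """
    # 1. collect the data clusters of our in_use bitmaps
    free_clusters = []
    for old in in_use_on_disk.values():
        free_clusters.extend(old['data'])

    tables = {}
    # 2. reuse a bitmap table only if its size did not change
    for bm in persistent_bitmaps:
        old = in_use_on_disk.get(bm['name'])
        if old and len(old['table']) == bm['table_size']:
            tables[bm['name']] = old['table']
        else:
            if old:
                free_clusters.extend(old['table'])
            tables[bm['name']] = alloc(bm['table_size'])

    data = {}
    # 3. take data clusters from free_clusters, allocate the rest
    for bm in persistent_bitmaps:
        reused = free_clusters[:bm['data_size']]
        del free_clusters[:bm['data_size']]
        missing = bm['data_size'] - len(reused)
        data[bm['name']] = reused + (alloc(missing) if missing else [])

    # 4. release whatever was not reused
    free(free_clusters)
    return tables, data
```

Note that clusters freed from resized tables in step 2 also land in free_clusters, so a later bitmap's data can reuse them, which matches the "extra clusters of an old bitmap table" observation above.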
With something like this it is not trivial to maintain consistency of the qcow2 image in case of a failure during reuse (we can get doubly-linked clusters or dead links in the on-disk image). A simple solution would be to remove all in_use bitmaps from the header extension, or to reallocate their bitmap tables with all zeroes.
Target: we want to reuse clusters of in_use bitmaps that were loaded by the current qcow2 driver (let's call them "our" in_use bitmaps).
1. Save the bitmap directory in memory.
2. Remove our in_use bitmaps from the bitmap directory on disk. From this point, in case of a failure we will finish with some leaked clusters but no corruption.
3. free_clusters list = all data clusters of our in_use bitmaps.
4. For each of our in_use bitmaps: if the table size is not appropriate, move the bitmap table clusters to free_clusters (or maybe better, free them, so as not to deceive the qcow2 allocator... Can I allocate several clusters by alloc(size), then assume that I have (size + cluster_size - 1) / cluster_size clusters, and that I can free these clusters separately?).
5. Save the bitmaps, using the free_clusters list for data clusters, the old existing bitmap tables, and allocating new space when needed.
6. Free any extra clusters remaining in the free_clusters list.
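Two pieces of the scheme above can be made concrete: the crash-safety trick of step 2 (shrink the on-disk directory before touching clusters) and the ceiling division asked about in step 4. A minimal sketch, assuming a default 64 KiB cluster size; the helper names are mine, not the driver's:

```python
CLUSTER_SIZE = 65536  # assumed qcow2 default cluster size

def clusters_needed(size_bytes: int) -> int:
    """Ceil division from step 4: how many clusters does an
    allocation of size_bytes actually cover?"""
    return (size_bytes + CLUSTER_SIZE - 1) // CLUSTER_SIZE

def remove_in_use_entries(directory: dict, our_in_use: set) -> dict:
    """Step 2: the on-disk directory without our in_use bitmaps.
    Writing this reduced directory before reusing any clusters means
    a later failure can only leak clusters, never leave the image
    pointing at stale or doubly-used data."""
    return {name: entry for name, entry in directory.items()
            if name not in our_in_use}
```

Whether the allocator actually permits freeing such sub-ranges of one alloc(size) call cluster by cluster is exactly the open question in step 4; the arithmetic only says how many clusters the allocation spans.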