Re: block stream and bitmaps


From: Kevin Wolf
Subject: Re: block stream and bitmaps
Date: Tue, 24 Mar 2020 11:18:31 +0100
User-agent: Mutt/1.12.1 (2019-06-15)

On 23.03.2020 at 19:06, John Snow wrote:
> Hi Kevin,
> 
> I'm hoping to get some preliminary ideas from you (capped at five
> minutes' effort?) on this subject. My ideas here are a bit shaky; I
> only have really rough notions so far.
> 
> We want to use bitmaps with 'drive' semantics; i.e. tracking changes to
> the visible guest data. What we have are bitmaps with node semantics,
> tracking changes to the data at a particular level in the graph.
> 
> For commit, this isn't a big deal: we can disable bitmaps in the backing
> file while we commit and then re-enable them on completion. We usually
> have a separate bitmap enabled on the root node that is recording writes
> during this time, which can be moved later.
> 
> For streaming, this is more challenging: new writes will dirty the
> bitmap, but so will writes from the stream job itself.
> 
> Semantically, we want to ignore writes from the stream while recording
> them from the guest -- differentiating based on source.

No, based on source is actually not what you want. What you really want
is that BDRV_REQ_WRITE_UNCHANGED doesn't mark any blocks dirty.

We discussed this specific case of streaming at FOSDEM (with Paolo and
probably Nir). Paolo was even convinced that unchanged writes already
behave like this, but we agreed that dirtying blocks for them would be a
bug. After checking that the code is indeed buggy, I was planning to
send a patch, but never got around to actually doing that. Sorry about
that.
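
To make the intended semantics concrete, here is a tiny standalone
model (Python, not QEMU code; WRITE_UNCHANGED below just stands in for
BDRV_REQ_WRITE_UNCHANGED): only writes that can actually change the
visible data dirty the bitmap, so stream/copy-on-read write-backs leave
it alone.

# Tiny standalone model of the intended semantics -- not QEMU code.
# WRITE_UNCHANGED stands in for BDRV_REQ_WRITE_UNCHANGED: requests that
# write back data the guest already sees (stream, copy on read) must
# not dirty the bitmap; only guest writes do.

WRITE_UNCHANGED = 0x1

class DirtyBitmap:
    def __init__(self, granularity=64 * 1024):
        self.granularity = granularity
        self.dirty = set()       # indices of dirty chunks
        self.enabled = True

    def set_dirty(self, offset, nbytes):
        if not self.enabled:
            return
        first = offset // self.granularity
        last = (offset + nbytes - 1) // self.granularity
        self.dirty.update(range(first, last + 1))

def write_req_finish(bitmaps, offset, nbytes, flags):
    # The whole point of the fix: skip dirtying for unchanged writes.
    if flags & WRITE_UNCHANGED:
        return
    for bm in bitmaps:
        bm.set_dirty(offset, nbytes)

bm = DirtyBitmap()
write_req_finish([bm], 0, 4096, 0)                               # guest write
write_req_finish([bm], 1024 * 1024, 64 * 1024, WRITE_UNCHANGED)  # stream copy
assert bm.dirty == {0}      # only the guest write is recorded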

> Bitmaps aren't really geared to do that right now. With the changes to
> Bdrv Roles that Max was engineering, do you think it's possible to add
> some kind of write source discrimination to bitmaps, or is that too messy?

I don't think it would work because copy-on-read requests come from the
same parent node as writes (no matter whether the legacy code in
block/io.c or a copy-on-read filter node is used).

> For both commit and stream, it might be nice to say: "This bitmap is
> enabled, but ignores writes from [all? specific types? specific
> instances?] jobs."

Commit is a bit trickier, because it's not WRITE_UNCHANGED. The result
is only unchanged for the top layer, but not for the backing file you're
committing to. I'm not sure whether we can represent this condition
somehow.
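
For reference, the disable/re-enable dance you describe around commit
would look roughly like the QMP sequence below (iotests-flavoured
sketch; all node, bitmap and job names are made up, and it assumes a
chain base <- mid <- active with 'active' attached to the guest):

# Rough sketch only. Assumes 'bitmap0' lives on 'base' (the commit
# target) and 'writes0' is already enabled on the root node 'active'.
import json

def qmp(execute, **arguments):
    # Stand-in for actually talking to QEMU; just prints the command.
    print(json.dumps({'execute': execute, 'arguments': arguments}))

# Stop recording into the bitmap on the commit target while the job
# runs; 'writes0' on the root node keeps recording guest writes.
qmp('block-dirty-bitmap-disable', node='base', name='bitmap0')

# Commit 'mid' down into 'base' (wait for BLOCK_JOB_COMPLETED before
# going on -- not shown here).
qmp('block-commit', **{'job-id': 'commit0', 'device': 'active',
                       'top-node': 'mid', 'base-node': 'base'})

# On completion, re-enable the bitmap and move the writes recorded in
# the meantime down into it.
qmp('block-dirty-bitmap-enable', node='base', name='bitmap0')
qmp('block-dirty-bitmap-merge', node='base', target='bitmap0',
    bitmaps=[{'node': 'active', 'name': 'writes0'}])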

> Or, I wonder if what we truly want is some kind of bitmap "forwarder"
> object on block-backend objects that represent the semantic drive view,
> and only writes through that *backend* get forwarded to the bitmaps
> attached to whatever node the bitmap is actually associated with.
> 
> (That might wind up causing weird problems too, though... since those
> objects are no longer intended to be user-addressable, managing that
> configuration might get intensely strange.)

Hm... Drive-based does suggest that it's managed at the BlockBackend
level. So having a bitmap that isn't added as a dirty bitmap to the BDS,
but only to the BB, does make sense to me. The BB would be addressed
with the qdev ID of the device, as usual (which underlines that it's
really per device).
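
Purely as illustration, the user-visible shape could mirror the
existing node-level commands, just addressing the device instead of a
node. Nothing like this exists today; the 'id' parameter below is made
up:

# Hypothetical only -- no such parameter exists today. The idea would
# be that the existing bitmap commands grow a way to address the
# BlockBackend via the qdev ID ('id' below is made up) instead of a
# node name.
import json

print(json.dumps({
    'execute': 'block-dirty-bitmap-add',   # existing command name
    'arguments': {
        'id': 'virtio-disk0',              # made up: qdev ID instead of 'node'
        'name': 'drive-bitmap0',
        'persistent': True,                # leads to the storage question below
    },
}))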

The part that's unclear to me is how to make such bitmaps persistent.
You can change the root node of a BB and even remove the root node
completely (for removable devices; but even changing is technically
remove followed by insert), so you may need to move the bitmap around
between image files, and at least for some time you might not have any
place to store the bitmap.

Or you say that you store it in one specific node, be it the root node
of the BB or not, and it will always stay there no matter how you change
the graph and whether the BB and that node are even in the same subtree.
That node would just get an additional refcount, so you can't remove it
until the BB goes away.
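
As a toy model of that second option (purely hypothetical, nothing like
this exists): the BB pins whichever node stores its bitmap with an
extra reference, so the node can only go away together with the BB,
however the graph changes in between.

class Node:
    def __init__(self, name):
        self.name = name
        self.refcnt = 0

    def ref(self):
        self.refcnt += 1

    def unref(self):
        self.refcnt -= 1
        if self.refcnt == 0:
            print(f'{self.name}: freed')

class BlockBackend:
    def __init__(self, bitmap_storage_node):
        # Pin the node that persists the device-level bitmap.
        self.bitmap_node = bitmap_storage_node
        self.bitmap_node.ref()

    def delete(self):
        # Only when the BB itself goes away is the storage node released.
        self.bitmap_node.unref()

node = Node('base.qcow2')
node.ref()                # reference held by the block graph
bb = BlockBackend(node)
node.unref()              # graph change drops its reference...
bb.delete()               # ...but the node lives until the BB is gone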

Unless you already have a better plan (I hope you do; I didn't think
about it for more than a few minutes), maybe the latter would actually
be the most reasonable solution.

Kevin