
Re: [PATCH 2/2] nbd: Add new qemu:joint-allocation metadata context


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH 2/2] nbd: Add new qemu:joint-allocation metadata context
Date: Thu, 10 Jun 2021 17:10:34 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

10.06.2021 16:47, Eric Blake wrote:
On Thu, Jun 10, 2021 at 03:30:17PM +0300, Vladimir Sementsov-Ogievskiy wrote:
The correct fix is for ovirt to additionally use the
qemu:allocation-depth metadata context added in 5.2: after all, the
actual determination for what is needed to recreate a qcow2 file is
not whether a cluster is sparse, but whether the allocation-depth
shows the cluster to be local.  But reproducing an image is more
efficient when handling known-zero clusters, which means that ovirt
has to track both base:allocation and qemu:allocation-depth metadata
contexts simultaneously.  While NBD_CMD_BLOCK_STATUS is just fine
sending back information for two contexts in parallel, it comes with
some bookkeeping overhead at the client side: the two contexts need
not report the same length of replies, and it involves more network
traffic.

Aren't both contexts described in one reply? Or what do you mean by the replies not
having the same length?

The example file demonstrates this.  We have:

base.raw    ABC-
top.qcow2   -D0-
guest sees  AD00

Querying base:allocation returns:
          0       65536    3  hole,zero
      65536       65536    0  allocated
     131072      131072    3  hole,zero

Querying qemu:allocation-depth returns:
          0       65536    0  unallocated
      65536      131072    1  local
     196608       65536    0  unallocated
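
For concreteness, here is a minimal client-side sketch of how the two listings
above could be obtained with libnbd's Python bindings. The nbd://localhost URI
is only illustrative, and it assumes qemu-nbd is already serving top.qcow2 with
the qemu:allocation-depth context enabled:

import nbd

h = nbd.NBD()
# Request both metadata contexts before connecting.
h.add_meta_context("base:allocation")
h.add_meta_context("qemu:allocation-depth")
h.connect_uri("nbd://localhost")          # illustrative URI, not from the patch

extents = {}  # context name -> list of (length, flags) pairs

def collect(metacontext, offset, entries, err):
    # The 32-bit API hands back a flat list: len0, flags0, len1, flags1, ...
    extents.setdefault(metacontext, []).extend(zip(entries[::2], entries[1::2]))

# A real client would loop until the whole disk is covered; one call is
# enough for this 256k example.
h.block_status(h.get_size(), 0, collect)

for ctx, pairs in extents.items():
    print(ctx, pairs)

Each context arrives as its own reply chunk, which is why the two lists can
(and here do) slice the image differently.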

Hmm, right. Sorry, I forgot how BLOCK_STATUS works for several contexts. I 
thought they were combined.


That is, the query starting at 64k returns different lengths (64k for
base:allocation, 128k for qemu:allocation-depth), and the client has
to process the smaller of the two regions before moving on to the next
query.  But if the client then does a query starting at 128k, it
either has to remember that it previously has information available
from the earlier qemu:allocation-depth, or repeats efforts over the
wire.
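
To make that bookkeeping concrete, here is a rough sketch (plain Python, the
function name is mine) of the lockstep walk a client has to do over the two
per-context extent lists, clamping each step to the shorter remaining extent so
the leftover of the longer one is carried over rather than re-fetched:

def correlate(base_alloc, alloc_depth):
    """Yield (offset, length, base_flags, depth_flags) with a common length.

    Both inputs are lists of (length, flags) pairs as reported for their
    respective contexts; the shorter remaining extent decides each step."""
    offset = 0
    it_a, it_d = iter(base_alloc), iter(alloc_depth)
    left_a = left_d = flags_a = flags_d = 0
    while True:
        if left_a == 0:
            try:
                left_a, flags_a = next(it_a)
            except StopIteration:
                return
        if left_d == 0:
            try:
                left_d, flags_d = next(it_d)
            except StopIteration:
                return
        step = min(left_a, left_d)
        yield offset, step, flags_a, flags_d
        offset += step
        left_a -= step
        left_d -= step

Fed with the two listings above ([(65536, 3), (65536, 0), (131072, 3)] and
[(65536, 0), (131072, 1), (65536, 0)]) this yields four aligned 64k rows,
which is essentially the merged view a joint context would let the server
compute instead.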

Hmm.. but if we are going to combine contexts in qemu, we face the same 
problem: the sources of the contexts may return information in different-sized 
chunks, so we'll have to cache something or query the same thing twice. But yes, 
at least we avoid doing it over the network.


The joy of having a single metadata context return both pieces of
information at once is that the client no longer has to do this
cross-correlation between the differences in extent lengths of the
parallel contexts.

We discussed in the past the option of also exposing the dirty status of every
block in the response. Again, this info is available using
"qemu:dirty-bitmap:xxx",
but just like allocation depth and base allocation, merging the results is hard,
and if we could also expose the dirty bit, this would make clients' lives
even better.
In this case I'm not sure "qemu:allocation" is the best name; maybe something
more generic like "qemu:extents" or "qemu:block-status" would be even better.


Oops. Could you please describe what the problem is with parsing several 
contexts simultaneously?

There is no inherent technical problem, just extra work.  Joining the
work at the server side is less coding effort than recoding the
boilerplate to join the work at every single client side.  And the
information is already present.  So we could just scrap this entire
RFC by stating that the information is already available, and it is
not worth qemu's effort to provide the convenience context.

Joining base:allocation and qemu:allocation-depth was easy - in fact,
since both use bdrv_block_status under the hood, we could (and
probably should!) merge it into a single qemu query.  But joining
base:allocation and qemu:dirty-bitmap:FOO will be harder, at which
point I question whether it is worth the complications.  And if you
argue that a joint context is not worthwhile without dirty bitmap(s)
being part of that joint context, then maybe this RFC is too complex
to worry about, and we should just leave the cross-correlation of
parallel contexts to be client-side, after all.



This all sounds to me as if we are going to implement "joint" combined contexts for 
every useful combination of existing contexts that users want. So, it's a kind of 
workaround for the inconvenient protocol we invented in the past.

Doesn't that mean we should instead rework how we export several contexts? 
Maybe we can improve the generic export of several contexts simultaneously, so that 
it is convenient for the client? Then we don't need any additional 
combined contexts.

The NBD protocol intentionally left wiggle room for servers to report
different extent lengths across different contexts.  But other than
qemu, I don't know of any other NBD servers advertising alternate
contexts.  If we think we can reasonably restrict the NBD protocol to
require that any server sending parallel contexts to a client MUST use
the same extent lengths for all parallel contexts (clients still have
to read multiple contexts, but the cross-correlation becomes easier
because the client doesn't have to worry about length mismatches), and
code that up in qemu, that's also something we can consider.

Or maybe even have it be an opt-in, where a client requests
NBD_OPT_ALIGN_META_CONTEXT; if the server acknowledges that option,
the client knows that it can request parallel NBD_OPT_SET_META_CONTEXT
and the extents replied to each NBD_OPT_BLOCK_STATUS will be aligned;
if the server does not acknowledge the option, then the client has the
choice of requesting at most one meta context, or else dealing with
unmatched extent lengths itself.

Yes, that sounds good. And that will work for any combination of contexts.

Actually, when the server doesn't support _ALIGN_, the client's behavior may be 
to simply ignore the longer lengths, shrinking all replies to the minimum of the 
returned lengths. This leads to more network traffic and probably some extra work 
on the server side, but the client logic remains simpler, and all problems go away 
once the server supports _ALIGN_.
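
A sketch of that fallback (the query helper below is hypothetical and stands in
for one NBD_CMD_BLOCK_STATUS round trip returning per-context lists of
(length, flags) pairs): the client keeps no state between calls, it just
advances by the shortest coverage any context reported and re-queries from
there.

def walk(disk_size, query):
    # query(offset, length) -> {context: [(length, flags), ...]}  (hypothetical)
    offset = 0
    while offset < disk_size:
        replies = query(offset, disk_size - offset)
        # Shrink every context's answer to the shortest total coverage...
        covered = min(sum(l for l, _ in ext) for ext in replies.values())
        # ...and consume just that much; the discarded tail of the longer
        # contexts is fetched again on the next iteration (extra traffic,
        # but no carried-over state).
        offset += covered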

--
Best regards,
Vladimir


