
From: Kevin Wolf
Subject: Re: [Qemu-block] [PATCH v6 22/42] block: Fix bdrv_get_allocated_file_size's fallback
Date: Wed, 11 Sep 2019 10:27:43 +0200
User-agent: Mutt/1.12.1 (2019-06-15)

On 11.09.2019 at 09:37, Max Reitz wrote:
> On 11.09.19 08:55, Kevin Wolf wrote:
> > On 11.09.2019 at 08:20, Max Reitz wrote:
> >> On 10.09.19 16:52, Kevin Wolf wrote:
> >>> On 09.08.2019 at 18:13, Max Reitz wrote:
> >>>> If the driver does not implement bdrv_get_allocated_file_size(), we
> >>>> should fall back to accumulating the allocated size of all non-COW
> >>>> children instead of just bs->file.
> >>>>
> >>>> Suggested-by: Vladimir Sementsov-Ogievskiy <address@hidden>
> >>>> Signed-off-by: Max Reitz <address@hidden>
> >>>
> >>> This smells like an overgeneralisation, but if we want to count all vmdk
> >>> extents, the qcow2 external data file, etc. it's an improvement anyway.
> >>> A driver that has a child that should not be counted must just remember
> >>> to implement the callback.
> >>>
> >>> Let me think of an example... How about quorum, for a change? :-)
> >>> Or the second blkverify child.
> >>>
> >>> Or eventually the block job filter nodes.
> >>
> >> I actually think it makes sense for all of these nodes to report the sum
> >> of all of their children’s allocated sizes.
> > 
> > Hm... Yes, in a way. But not much more than it would make sense to
> > report the sum of the sizes of all images in the whole backing chain
> > (this is a useful thing to ask for, just maybe not the right thing to
> > return for a low-level interface). But I can accept that it's maybe a
> > bit more expected for quorum and blkverify than for COW images.
> > 
> > If you include the block job filter nodes, I have to disagree, though.
> > If mirror_top_bs (or any other job filter) sits in the middle of the
> > source chain, then I certainly don't want to see the target size added
> > to it.
> Hm, I don’t care much either way.  I think it makes complete sense to
> add the target size there, but OTOH it’s only temporary while the job
> runs, so it may be a bit confusing if it suddenly goes up and then down
> again.

I think the number that most users are interested in is knowing how much
space the image for their /dev/vda takes up on the host.

I can see how they might be interested in not only that one image file,
but all other image files connected to it, i.e. their /dev/vda with all
of its snapshots. This would mean counting backing files. I think adding
up the numbers for this should be done in the management layer.
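
As a sketch of that management-layer summing: `qemu-img info --backing-chain --output=json` emits a JSON array with one entry per image in the chain, and each entry's "actual-size" field is that image's allocation on the host. A small helper (the function name is mine, purely illustrative) could then just add them up:

```python
import json

def chain_allocated_size(info_json: str) -> int:
    """Sum the host-side allocation over a whole backing chain.

    `info_json` is the output of
        qemu-img info --backing-chain --output=json IMAGE
    i.e. a JSON array with one entry per image in the chain, where
    "actual-size" is that image's allocated size on the host filesystem.
    """
    return sum(entry.get("actual-size", 0) for entry in json.loads(info_json))
```

The caller would capture the qemu-img output (e.g. via subprocess) and feed it in; the point is only that walking the chain happens above the block layer, not inside it.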

I can possibly also imagine users wanting to count everything that's
even loosely connected to their /dev/vda, like copies of it. I doubt,
however, they want to count only copies that are currently being made,
but not snapshots and copies that have been completed earlier. So this
is clearly a management layer thing, too.

> But I think this is the special case, so this is what should be handled
> in a driver callback.

It's a special case, yes. But see below.

> >> If a quorum node has three children with allocated sizes of 3 MB, 1 MB,
> >> and 2 MB, respectively (totally possible if some have explicit zeroes
> >> and others don’t; it may also depend on the protocol, the filesystem,
> >> etc.), then I think it makes most sense to report indeed 6 MB for the
> >> quorum subtree as a whole.  What would you report?  3 MB?
> > 
> > Do it the quorum way: Just vote!
> Add an option for it?  Average, maximum, median, majority, sum? :-)

We could also introduce a mode with an Electoral College so that
sometimes an image that missed the majority has a chance to win anyway.

> > No, you're right, of course. -ENOTSUP is probably the only other thing
> > you could do then.
> > 
> >>> Ehm... Maybe I should just take back what I said first. It almost feels
> >>> like it would be better if qcow2 and vmdk explicitly used a handler that
> >>> counts all children (could still be a generic one in block.c) rather
> >>> than having to remember to disable the functionality everywhere where we
> >>> don't want to have it.
> >>
> >> I don’t, because everywhere we don’t want this functionality, we still
> >> need to choose a child.  This has to be done by the driver anyway.
> > 
> > Well, by default the primary child, which should cover like 90% of the
> > drivers?
> Hm, yes.
> But I still think that the drivers that do not want to count every
> single non-COW child are the exception.

They are, but drivers that want to count more than their primary node
are exceptions, too. And I think you're more likely to remember adding
the callback when you want to have a certain feature, not when you don't
want to have it.

I really think we're likely to forget adding the callback where we need
to disable the feature.

I can see two options that should address both of our views:

1. Just don't have a fallback at all, make the callback mandatory and
   provide implementations in block.c that can be referred to in
   BlockDriver. Not specifying the callback causes an assertion failure,
   so we'd hopefully notice it quite early (assuming that we run either
   'qemu-img info' or 'query-block' on a configuration with the block
   driver, but I think that's fairly safe to assume).

2. Make the 90% solution a 100% solution: Allow drivers to have multiple
   storage children (for vmdk) and then have the fallback add up the
   primary child plus all storage children. This is what I suggested as
   the documented semantics in my initial reply to this patch (that you
   chose not to answer).

   Adding the size of storage children covers qcow2 and vmdk.

   As the job filter won't declare the target or any other involved
   nodes their storage nodes (I hope), this will do the right thing for
   them, too.

   For quorum and blkverify both ways could be justifiable. I think they
   probably shouldn't declare their children as storage nodes. They are
   more like filters that don't have a single filtered node. So some
   kind of almost-filters.
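
To make the two options concrete, here is a rough, self-contained C sketch; the struct layout, the BDRV_CHILD_* role flags, and the child list are all pared-down stand-ins for illustration, not QEMU's actual definitions:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical role flags; not QEMU's real BdrvChildRole. */
enum {
    BDRV_CHILD_PRIMARY = 1 << 0,
    BDRV_CHILD_STORAGE = 1 << 1,
    BDRV_CHILD_COW     = 1 << 2,
};

typedef struct BdrvChild {
    int64_t allocated;              /* stand-in for a recursive query */
    unsigned role;
    struct BdrvChild *next;
} BdrvChild;

typedef struct BlockDriverState BlockDriverState;

typedef struct BlockDriver {
    int64_t (*bdrv_get_allocated_file_size)(BlockDriverState *bs);
} BlockDriver;

struct BlockDriverState {
    const BlockDriver *drv;
    BdrvChild *children;            /* singly linked list in this sketch */
};

/* Option 2's generic helper in block.c: count the primary child plus
 * all storage children (vmdk extents, qcow2's external data file), but
 * not COW backing files or a job's target. */
int64_t bdrv_sum_allocated_file_size(BlockDriverState *bs)
{
    int64_t sum = 0;
    for (BdrvChild *c = bs->children; c; c = c->next) {
        if (c->role & (BDRV_CHILD_PRIMARY | BDRV_CHILD_STORAGE)) {
            sum += c->allocated;
        }
    }
    return sum;
}

/* Option 1: no implicit fallback at all; a driver that forgot to pick
 * an implementation trips the assertion on the first query. */
int64_t bdrv_get_allocated_file_size(BlockDriverState *bs)
{
    assert(bs->drv && bs->drv->bdrv_get_allocated_file_size);
    return bs->drv->bdrv_get_allocated_file_size(bs);
}
```

Under option 1, a driver that is happy with the generic semantics would simply set its callback to bdrv_sum_allocated_file_size; quorum, blkverify and the job filters would have to make an explicit choice instead of silently inheriting a fallback.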

