qemu-block

Re: Qemu block filter insertion/removal API


From: Vladimir Sementsov-Ogievskiy
Subject: Re: Qemu block filter insertion/removal API
Date: Fri, 21 May 2021 21:32:59 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.10.2

19.05.2021 17:14, Vladimir Sementsov-Ogievskiy wrote:
19.05.2021 16:02, Kevin Wolf wrote:
Am 19.05.2021 um 14:19 hat Vladimir Sementsov-Ogievskiy geschrieben:
19.05.2021 14:44, Kevin Wolf wrote:
Am 17.05.2021 um 14:44 hat Vladimir Sementsov-Ogievskiy geschrieben:
Hi all!

I'd like to be sure that we know where we are going to.

In the blockdev era, where the QEMU user is aware of block nodes, and all
nodes have proper names and are controlled by the user, we can use block
filters efficiently.

We already have some useful filters: copy-on-read, throttling, compress. In my
parallel series I make the backup-top filter public and useful without backup
block jobs. But currently filters can be inserted only together with opening
their child: we can specify filters on the qemu command line, or a filter can
take its place in the block node chain created by blockdev-add.

Still, it would be good to insert/remove filters on demand.

Currently we are going to use x-blockdev-reopen for this. Still, it can't be
used to insert a filter above a root node (as x-blockdev-reopen can change only
block node options and their children). In my series "[PATCH 00/21] block:
publish backup-top filter" I propose (as Kevin suggested) to modify qom-set so
that it can set the drive option of a running device. That's not difficult, but
it means that we have different scenarios for inserting/removing filters:

1. filter above root node X:

inserting:

    - do blockdev-add to add a filter (and specify X as its child)
    - do qom-set to set the new filter as the root node instead of X

removing

    - do qom-set to make X a root node again
    - do blockdev-del to drop a filter
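
For example, a rough QMP sketch of scenario 1 with a copy-on-read filter.
The node name "filter0", the child "X" and the qdev path are placeholders,
and the qom-set step assumes the proposed extension that lets it set the
drive property of a running device:

    # "X", "filter0" and the virtio0 path are placeholders
    { "execute": "blockdev-add",
      "arguments": { "driver": "copy-on-read", "node-name": "filter0",
                     "file": "X" } }
    # assumes the proposed qom-set support for the drive property
    { "execute": "qom-set",
      "arguments": { "path": "/machine/peripheral/virtio0",
                     "property": "drive", "value": "filter0" } }

and, for removal, the same two steps in reverse:

    { "execute": "qom-set",
      "arguments": { "path": "/machine/peripheral/virtio0",
                     "property": "drive", "value": "X" } }
    { "execute": "blockdev-del", "arguments": { "node-name": "filter0" } }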

2. filter between two block nodes P and X. (For example, X is a backing child 
of P)

inserting

    - do blockdev-add to add a filter (and specify X as its child)
    - do blockdev-reopen to set P.backing = filter

removing

    - do blockdev-reopen to set P.backing = X
    - do blockdev-del to drop a filter
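
A similar sketch for scenario 2, again with placeholder names; note that
x-blockdev-reopen requires re-specifying P's options, so only the ones
relevant here are shown, and "P-file" stands for P's existing file child:

    # "X", "P", "P-file" and "filter0" are placeholders
    { "execute": "blockdev-add",
      "arguments": { "driver": "copy-on-read", "node-name": "filter0",
                     "file": "X" } }
    { "execute": "x-blockdev-reopen",
      "arguments": { "driver": "qcow2", "node-name": "P",
                     "file": "P-file", "backing": "filter0" } }

Removal is symmetric: reopen P with backing set back to X, then
blockdev-del the filter.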


And, probably we'll want transaction support for all these things.


Is that OK? Or do we need some kind of additional blockdev-replace command
that can replace one node with another, so that in both cases we would do:

inserting:
    - blockdev-add filter
    - blockdev-replace (make all parents of X point to the new filter
instead, except for the filter itself of course)

removing:
    - blockdev-replace (make all parents of the filter become parents of X
instead)
    - blockdev-del filter

It's simple to implement, and it seems to me that it is simpler to use. Any
thoughts?
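
As a sketch of what that could look like on the wire (the command and its
argument names are hypothetical, nothing like this exists yet):

    { "execute": "blockdev-add",
      "arguments": { "driver": "copy-on-read", "node-name": "filter0",
                     "file": "X" } }
    # blockdev-replace is hypothetical, as are its argument names
    { "execute": "blockdev-replace",
      "arguments": { "node-name": "X", "new-node-name": "filter0" } }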

One reason I remember why we didn't decide to go this way in the many
"dynamic graph reconfiguration" discussions we had is that it's not
generic enough to cover all cases. But I'm not sure if we ever
considered root nodes as a separate case. I acknowledge that having two
different interfaces is inconvenient, and integrating qom-set in a
transaction is rather unlikely to happen.

The reason why it's not generic is that it restricts you to doing the
same thing for all parents. Imagine this:

                      +- virtio-blk
                      |
      file <- qcow2 <-+
                      |
                      +- NBD export

Now you want to throttle the NBD export so that it doesn't interfere
with your VM too much. With your simple blockdev-replace this isn't
possible. You would have to add the filter to both users or to none.
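
The desired result would be something like this, which a replace-all-parents
command cannot produce, because only the export's edge should be redirected
through the filter:

                      +- virtio-blk
                      |
      file <- qcow2 <-+
                      |
                      +- throttle <- NBD export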

In theory, blockdev-replace could take a list of the edges that should
be changed to the new node. The problem is that edges don't have names,
and even the parents don't necessarily have one (and if they do, they
are in separate namespaces, so a BlockBackend, a job and an export could
all have the same name), so finding a good way to refer to them in QMP
doesn't sound trivial.


Hm. I like the idea. And it seems feasible to me:

Both exports and block jobs work through a BlockBackend.

So, for block jobs, we can add optional parameters like source-blk-name
and target-blk-name. If these parameters are specified, the BlockBackends
will be named, and the user will be able to do blockdev-replace.
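
For illustration, starting a backup job with such parameters might look
like this (source-blk-name and target-blk-name are the proposed, not yet
existing, options; the other names are placeholders):

    # source-blk-name/target-blk-name are proposed, not existing, options
    { "execute": "blockdev-backup",
      "arguments": { "job-id": "backup0", "device": "src",
                     "target": "tgt", "sync": "full",
                     "source-blk-name": "backup0-src",
                     "target-blk-name": "backup0-tgt" } }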

I'm not sure if giving them a name is a good idea. Wouldn't it make the
BlockBackend accessible to the user, who could then make a device use
it?

For exports it's a bit trickier: it would be strange to add a separate
argument for the export's blk, as the export already has an id. So, I'd do
the following:

1. make the blk named (with the same name as the export itself) iff the
    name does not conflict with other blks
2. deprecate export names that duplicate existing blk names.

Yes, if we decide that giving them a name is a good idea, it's possible,
but still a change that requires deprecation, as you say.

The third one is devices (which is what I actually meant when I said
BlockBackend), which also have anonymous BlockBackends in the -blockdev
world. The same approach could work, but it would essentially mean
unifying the QOM and the block namespace, which sounds more likely to
produce conflicts than exports.

Then, blockdev-replace takes a parents list, where a parent is either a
node-name or a blk name.

Note that you need both a node-name and a child name to unambiguously
identify an edge.

I guess you could do something like the following, it's just a bit
verbose:

{ 'enum': 'BlockEdgeParentType',
   'data': ['block-node', 'device', 'export', 'job'] }

Interesting that it intersects with @XDbgBlockGraphNodeType.. Looking at it,
I see that a BDS node is called "block-driver" there, not "block-node". That's
because the idea was that we have a block graph where all parents are included,
so they are "nodes" as well.


{ 'struct': 'BlockEdgeNode',
   'data': { 'node-name': 'str', 'child-name': 'str' } }
{ 'struct': 'BlockEdgeDevice', 'data': { 'qdev': 'str' } }
{ 'struct': 'BlockEdgeExport', 'data': { 'id': 'str' } }
{ 'struct': 'BlockEdgeJob',
   'data': { 'id': 'str',
             'role': '...some enum...?' } }

{ 'union': 'BlockEdge',
   'base': { 'parent-type': 'BlockEdgeParentType' },
   'discriminator': 'parent-type',
   'data': {
       'block-node': 'BlockEdgeNode',
       'device': 'BlockEdgeDevice',
       'export': 'BlockEdgeExport',
       'job': 'BlockEdgeJob'
   } }
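
With such a schema, a hypothetical blockdev-replace call that redirects only
the NBD export's edge to a throttle filter might look like this (the command
name and its top-level arguments are assumptions):

    # blockdev-replace and its top-level arguments are hypothetical
    { "execute": "blockdev-replace",
      "arguments": {
          "new-child": "throttle0",
          "edges": [ { "parent-type": "export", "id": "nbd0" } ] } }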

Maybe BlockEdgeJob (where the correct definition isn't obvious) is
actually unnecessary if we can make sure that jobs always go through
their filter instead of owning a BlockBackend. That's what they really
should be doing anyway.


I still think that block jobs may operate without a filter in some cases..
But the schema looks good, I'll try.



Now I'm not sure that's a good idea. It forces the user to handle different
types of parents. So, the user has to build an abstraction of a block parent
and handle all the types. Adding more types in the future will require
modifying this user logic..

What about the following compromise:

{ 'struct': 'GenericNode',
  'data': {
     'id': 'str',   # may be a node-name, qdev id, export name or job id
     '*type': 'BlockEdgeParentType'   # optional, may be omitted if @id is
                                      # not ambiguous
} }

{ 'struct': 'BlockEdge',
  'data': {
     'parent': 'GenericNode',
     '*name': 'str'   # optional, may be omitted if @parent has only one
                      # child (good for exports and block devices)
} }
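
With this compromise, the same edge from the example above could be specified
without the user caring about the parent type, as long as the id is
unambiguous (the command itself is still hypothetical):

    # blockdev-replace and its top-level arguments are hypothetical
    { "execute": "blockdev-replace",
      "arguments": {
          "new-child": "throttle0",
          "edges": [ { "parent": { "id": "nbd0" } } ] } }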


Then, if the user of the interface cares to give different names to all
generic nodes (jobs, block nodes, devices and exports), they can use a
rather simple interface.

Then we can deprecate any duplication between qdev ids, node-names, job ids and 
export names.


--
Best regards,
Vladimir


