qemu-block

Re: [PATCH] block/stream: Drain subtree around graph change


From: Kevin Wolf
Subject: Re: [PATCH] block/stream: Drain subtree around graph change
Date: Tue, 5 Apr 2022 16:41:08 +0200

On 05.04.2022 at 14:12, Vladimir Sementsov-Ogievskiy wrote:
> Thanks Kevin! On the mailing list I have already run out of arguments
> in the battle against using subtree drains to isolate graph
> modification operations from each other in different threads.
> 
> (Note also that the latest version of this patch is "[PATCH v2]
> block/stream: Drain subtree around graph change".)

Oops, I completely missed the v2. Thanks!

> About avoiding polling during graph-modifying operations, there is a
> problem: some I/O operations are part of block-graph-modifying
> operations. At the very least, rewriting the "backing_file_offset" and
> "backing_file_size" fields in the qcow2 header is.
> 
> We can't just separate rewriting the metadata from the graph-modifying
> operation: if we did, another graph-modifying operation could
> interleave and we would write outdated metadata.

Hm, generally we don't update image metadata when we reconfigure the
graph. Most changes are temporary (like insertion of filter nodes) and
the image header only contains a "default configuration" to be used on
the next start.

There are only a few places that update the image header; I think it's
generally block job completions. They obviously update the in-memory
graph, too, but they don't write to the image file (and therefore
potentially poll) in the middle of updating the in-memory graph; they
do both in separate steps.

I think this is okay. We just have to avoid polling in the middle of
graph updates, because if something else changes the graph at that
point, it's no longer clear that we're really doing what the caller had
in mind.
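
To make the ordering concrete, here is a minimal standalone sketch (not
actual QEMU code; all names are made up for illustration): the header
write, which may do I/O and therefore poll, runs first while the graph
still has its old, consistent shape, and only afterwards is the
in-memory edge swapped, with no polling in between:

/* Hypothetical sketch, not QEMU code: a job completion that keeps the
 * image-header write and the in-memory graph change as two separate
 * steps, so no I/O (and hence no polling) happens while the graph is in
 * an intermediate state. */

#include <stdio.h>

typedef struct BlockNode {
    const char *name;
    struct BlockNode *backing;      /* in-memory graph edge */
} BlockNode;

/* Step 1: may do I/O and poll; the graph still has its old shape. */
static int write_backing_fields_to_header(BlockNode *bs, BlockNode *base)
{
    printf("rewriting header of %s: backing file -> %s\n",
           bs->name, base ? base->name : "(none)");
    return 0;                       /* pretend the header update succeeded */
}

/* Step 2: purely in-memory, must not poll or yield. */
static void set_backing_edge(BlockNode *bs, BlockNode *base)
{
    bs->backing = base;
}

static int job_complete(BlockNode *top, BlockNode *new_base)
{
    int ret = write_backing_fields_to_header(top, new_base);
    if (ret < 0) {
        return ret;
    }
    set_backing_edge(top, new_base); /* in-memory swap, no polling since the write */
    return 0;
}

int main(void)
{
    BlockNode base = { "base.qcow2", NULL };
    BlockNode mid  = { "mid.qcow2", &base };
    BlockNode top  = { "top.qcow2", &mid };

    return job_complete(&top, &base) < 0 ? 1 : 0;
}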

> So I still think we need some kind of global lock for graph-modifying
> operations, or per-BDS locks as you propose. But in that case we need
> to be sure that taking all of the needed per-BDS locks can't deadlock.

I guess this depends on the exact granularity of the locks we're using.
If you take the lock only while updating a single edge, I don't think
you could easily deadlock. If you hold it for more complex operations,
it becomes harder to tell without checking the code.
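
As a minimal sketch of the single-edge case (hypothetical names, plain
pthreads instead of QEMU's own locking primitives): if the lock is held
only for the pointer swap of one edge and no other lock is acquired
while it is held, two threads updating arbitrary edges can't deadlock
against each other. As soon as one lock has to cover a multi-edge
reconfiguration, you additionally need a consistent acquisition order.

/* Hypothetical sketch, not QEMU code: one lock per edge, held only for
 * the duration of a single pointer swap. */

#include <pthread.h>
#include <stdio.h>

typedef struct Node Node;

typedef struct Edge {
    pthread_mutex_t lock;           /* protects 'child' only */
    Node *child;
} Edge;

struct Node {
    const char *name;
    Edge backing;                   /* one outgoing edge, e.g. the backing file */
};

/* The lock is taken for the swap only and no other lock is acquired
 * while it is held, so concurrent callers can't deadlock. */
static void edge_set_child(Edge *edge, Node *new_child)
{
    pthread_mutex_lock(&edge->lock);
    edge->child = new_child;
    pthread_mutex_unlock(&edge->lock);
}

int main(void)
{
    Node base = { "base", { PTHREAD_MUTEX_INITIALIZER, NULL } };
    Node top  = { "top",  { PTHREAD_MUTEX_INITIALIZER, &base } };

    /* Drop top's backing edge, as a stream job completion might. */
    edge_set_child(&top.backing, NULL);
    printf("%s is now backed by %s\n", top.name,
           top.backing.child ? top.backing.child->name : "(nothing)");
    return 0;
}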

Kevin



