
Re: [PATCH] block/stream: Drain subtree around graph change


From: Emanuele Giuseppe Esposito
Subject: Re: [PATCH] block/stream: Drain subtree around graph change
Date: Tue, 5 Apr 2022 20:24:31 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.2.0


On 05/04/2022 at 19:53, Emanuele Giuseppe Esposito wrote:
> 
> 
> On 05/04/2022 at 17:04, Kevin Wolf wrote:
>> On 05.04.2022 at 15:09, Emanuele Giuseppe Esposito wrote:
>>> On 05/04/2022 at 12:14, Kevin Wolf wrote:
>>>> I think all of this is really relevant for Emanuele's work, which
>>>> involves adding AIO_WAIT_WHILE() deep inside graph update functions. I
>>>> fully expect that we would see very similar problems, and just stacking
>>>> drain sections over drain sections that might happen to usually fix
>>>> things, but aren't guaranteed to, doesn't look like a good solution.
>>>
>>> Yes, I think at this point we all agreed to drop subtree_drain as a
>>> replacement for the AioContext lock.
>>>
>>> The alternative is what Paolo proposed in the other thread, "Removal of
>>> AioContext lock, bs->parents and ->children: proof of concept".
>>> I am not sure which thread you replied to first :)
>>
>> This one, I think. :-)
>>
>>> I think that proposal is not far from your idea, and it avoids
>>> introducing or even using drains at all.
>>> Not sure why you called it a "step backwards even from AioContext locks".
>>
>> I was only referring to the lock locality there. AioContext locks are
>> really coarse, but still a finer granularity than a single global lock.
>>
>> In the big picture, it would still be better than the AioContext lock, but
>> that's because it's a different type of lock, not because it has better
>> locality.
>>
>> So I was just wondering if we can't have the different type of lock and
>> make it local to the BDS, too.
> 
> I guess this is the right time to discuss this.
> 
> I think that a global lock will be easier to handle, and we already have
> a concrete implementation (cpus-common).
> 
> I think that the reads in some sense are already BDS-specific, because
> each BDS that is reading has an internal flag.
> Writes, on the other hand, are global. If a write is happening, no other
> read at all can run, even if it has nothing to do with it.
> 
> The question then is: how difficult would it be to implement a
> BDS-specific write?
> From the API perspective, change
> bdrv_graph_wrlock(void);
> into
> bdrv_graph_wrlock(BlockDriverState *parent, BlockDriverState *child);
> I am not sure how complicated it would be. For sure all the global
> variables would end up in the BDS struct.
> 
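To make the comparison concrete, here is a toy model (plain pthreads, not
QEMU code; every name in it is made up for illustration, and a single
writer at a time, e.g. under the BQL, is assumed). Each node carries its
own reading flag, the current flavour of the write lock waits for readers
on every node, and a hypothetical per-edge wrlock(parent, child) would
only wait on the two nodes whose edge is being changed:

#include <pthread.h>
#include <stdbool.h>

#define NB_NODES 4

typedef struct Node {
    bool reading_graph;   /* per-node "I am reading the graph" flag */
    bool has_waiter;      /* a graph change targeting this node is pending */
} Node;

static Node nodes[NB_NODES];
static bool global_writer;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

/* Read side: per node; cheap unless a writer is pending */
static void graph_rdlock(Node *n)
{
    pthread_mutex_lock(&lock);
    while (global_writer || n->has_waiter) {
        pthread_cond_wait(&cond, &lock);
    }
    n->reading_graph = true;
    pthread_mutex_unlock(&lock);
}

static void graph_rdunlock(Node *n)
{
    pthread_mutex_lock(&lock);
    n->reading_graph = false;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}

/* Current flavour: one global write that excludes readers on all nodes */
static void graph_wrlock_global(void)
{
    pthread_mutex_lock(&lock);
    global_writer = true;
    for (int i = 0; i < NB_NODES; i++) {
        while (nodes[i].reading_graph) {
            pthread_cond_wait(&cond, &lock);
        }
    }
    pthread_mutex_unlock(&lock);
}

static void graph_wrunlock_global(void)
{
    pthread_mutex_lock(&lock);
    global_writer = false;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}

/* BDS-specific flavour being asked about: only the two nodes whose edge
 * is modified have to be free of readers */
static void graph_wrlock_edge(Node *parent, Node *child)
{
    pthread_mutex_lock(&lock);
    parent->has_waiter = child->has_waiter = true;
    while (parent->reading_graph || child->reading_graph) {
        pthread_cond_wait(&cond, &lock);
    }
    pthread_mutex_unlock(&lock);
}

static void graph_wrunlock_edge(Node *parent, Node *child)
{
    pthread_mutex_lock(&lock);
    parent->has_waiter = child->has_waiter = false;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}

In this toy model, the state that would have to move from globals into
the BDS struct is exactly the per-node pair of flags; what stays global
is only the mutex and condvar used for waiting (and in QEMU that waiting
would be coroutine/AIO_WAIT_WHILE based rather than a condvar anyway).
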
> On the other side, making the read generic instead could also be interesting.
> Think about drain: it is a recursive function, and it doesn't really
> make sense to take the rdlock for each node it traverses.

Otherwise, a simple solution for drains that requires no change at all is
to just take the rdlock on the bs calling drain; since each write waits
for all reads to complete, it will work anyway.

The only detail is that assert_bdrv_graph_readable() will then need to
iterate through all nodes to be sure that at least one of them is
actually reading.
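
A rough sketch of that option (hypothetical, not a real patch: the
*_rdlocked wrappers and a bdrv_graph_rdlock()/rdunlock() pair that takes
a bs argument are invented for illustration; bdrv_drained_begin()/end(),
assert_bdrv_graph_readable() and the ->reading_graph flag are the names
used in this thread):

void bdrv_drained_begin_rdlocked(BlockDriverState *bs)
{
    bdrv_graph_rdlock(bs);    /* only the node that initiates the drain */
    bdrv_drained_begin(bs);   /* recursion below takes no further rdlocks */
}

void bdrv_drained_end_rdlocked(BlockDriverState *bs)
{
    bdrv_drained_end(bs);
    bdrv_graph_rdunlock(bs);
}

/*
 * Since a graph writer waits for *all* readers anyway, it is enough that
 * at least one node is marked as reading, so the assertion has to scan
 * every node instead of checking only the current one.
 */
void assert_bdrv_graph_readable(void)
{
    BlockDriverState *bs;
    bool reading = false;

    /* iterate whatever list of all BDSes the real code keeps */
    QTAILQ_FOREACH(bs, &all_bdrv_states, bs_list) {
        if (bs->reading_graph) {
            reading = true;
            break;
        }
    }
    assert(reading);
}

The cost shows up only in the debug assertion, which becomes a scan over
all nodes instead of a check on the current one; the write side is
unchanged because it already waits for every reader.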

So yeah, I know this might be hard to evaluate without an actual
implementation, but my conclusion is to leave the lock as it is for now.

> Even though I don't know an easy way to replace the ->has_waiter and
> ->reading_graph flags...
> 
> Emanuele
> 



