Re: [Qemu-devel] [RFC 1/5] block/nbd: Fix hang in .bdrv_close()
From: Max Reitz
Subject: Re: [Qemu-devel] [RFC 1/5] block/nbd: Fix hang in .bdrv_close()
Date: Fri, 12 Jul 2019 13:44:43 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.7.2
On 12.07.19 13:23, Kevin Wolf wrote:
> On 12.07.2019 at 13:09, Max Reitz wrote:
>> On 12.07.19 13:01, Kevin Wolf wrote:
>>> On 12.07.2019 at 12:47, Max Reitz wrote:
>>>> On 12.07.19 11:24, Kevin Wolf wrote:
>>>>> On 11.07.2019 at 21:58, Max Reitz wrote:
>>>>>> When nbd_close() is called from a coroutine, the connection_co never
>>>>>> gets to run, and thus nbd_teardown_connection() hangs.
>>>>>>
>>>>>> This is because aio_co_enter() only puts the connection_co into the main
>>>>>> coroutine's wake-up queue, so this main coroutine needs to yield and
>>>>>> reschedule itself to let the connection_co run.
>>>>>>
>>>>>> Signed-off-by: Max Reitz <address@hidden>
>>>>>> ---
>>>>>> block/nbd.c | 12 +++++++++++-
>>>>>> 1 file changed, 11 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/block/nbd.c b/block/nbd.c
>>>>>> index 81edabbf35..b83b6cd43e 100644
>>>>>> --- a/block/nbd.c
>>>>>> +++ b/block/nbd.c
>>>>>> @@ -135,7 +135,17 @@ static void nbd_teardown_connection(BlockDriverState *bs)
>>>>>>      qio_channel_shutdown(s->ioc,
>>>>>>                           QIO_CHANNEL_SHUTDOWN_BOTH,
>>>>>>                           NULL);
>>>>>> -    BDRV_POLL_WHILE(bs, s->connection_co);
>>>>>> +
>>>>>> +    if (qemu_in_coroutine()) {
>>>>>> +        /* Let our caller poll and just yield until connection_co is done */
>>>>>> +        while (s->connection_co) {
>>>>>> +            aio_co_schedule(qemu_get_current_aio_context(),
>>>>>> +                            qemu_coroutine_self());
>>>>>> +            qemu_coroutine_yield();
>>>>>> +        }
>>>>>
>>>>> Isn't this busy waiting? Why not let s->connection_co wake us up when
>>>>> it's about to terminate instead of immediately rescheduling ourselves?
>>>>
>>>> Yes, it is busy waiting, but I didn’t find that bad. The connection_co
>>>> will be invoked in basically every iteration, and once there is no
>>>> pending data, it will quit.
>>>>
>>>> The answer to “why not...”, of course, is that it’d be more complicated.
>>>>
>>>> But anyway.
>>>>
>>>> Adding a new function qemu_coroutine_run_after(target) that adds
>>>> qemu_coroutine_self() to the given @target coroutine’s wake-up queue and
>>>> then using that instead of scheduling works, too, yes.
>>>>
>>>> I don’t really like being responsible for coroutine code, though...
>>>>
>>>> (And maybe it’d be better to make it qemu_coroutine_yield_for(target),
>>>> which does the above and then yields?)
>>>
>>> Or just do something like this, which is arguably not only a fix for the
>>> busy wait, but also a code simplification:
>>
>> 1. Is that guaranteed to work? What if data sneaks in, the
>> connection_co handles that, and doesn’t wake up the teardown_co? Will
>> it be re-scheduled?
>
> Then connection_co is buggy because we clearly requested that it
> terminate.
Did we? This would be done by setting s->quit to true, which isn’t
explicitly done here.
I thought it worked by just waking up the coroutine until it doesn’t
receive anything anymore, because the connection is closed. Now I don’t
know whether QIO_CHANNEL_SHUTDOWN_BOTH discards data that has been
received before the channel is closed. I don’t expect it to.
> It is possible that it does so only after handling another
> request, but this wouldn't be a problem. teardown_co would then just
> sleep for a few cycles more until connection_co is done and reaches the
> aio_co_wake() call.
I don’t quite understand, because how connection_co would proceed
after handling another request was exactly my question.  If it were to
yield and never be woken up again, it would never be done.
But I’ve followed nbd_receive_reply() now, and I suppose nbd_read_eof()
will simply never yield after we have invoked qio_channel_shutdown().
>> 2. This is precisely what I didn’t want to do, because we already have
>> this functionality in the form of Coroutine.co_queue_wakeup. Why
>> duplicate it here?
>
> co_queue_wakeup contains coroutines to be run at the next yield point
> (or termination), which may be when connection_co is actually done, but
> it might also be earlier.
Yes, if it handles another request, that would be the yield point at the
end of its main loop.
But that would be solved by simply putting the yield_for() in a loop,
like the one that already exists for the non-coroutine case with
BDRV_POLL_WHILE().
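(A sketch of what that loop could look like in
nbd_teardown_connection(), reusing the hypothetical
qemu_coroutine_yield_for() from above; the else branch is the existing
non-coroutine path:)

if (qemu_in_coroutine()) {
    /* Re-queued from connection_co's wake-up queue each time it
     * yields; loop until it has actually terminated. */
    while (s->connection_co) {
        qemu_coroutine_yield_for(s->connection_co);
    }
} else {
    BDRV_POLL_WHILE(bs, s->connection_co);
}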
> My explicit aio_co_wake() at the end of
> connection_co is guaranteed to run only when connection_co is done.
I can’t explain it, but it feels off to me to add this field to
BDRVNBDState and have connection_co handle it just for this special
case.  It seems to me that if we had a yield_for() function, we could
simply use it instead -- and I’d prefer a generic function over a
special-case implementation.
I do understand that “it feels off” is not a reasonable justification
for doing anything.
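(For comparison, a sketch of the wake-up-field approach described
above. Kevin's actual code was trimmed from the quotes, so the
teardown_co field in BDRVNBDState and its exact placement are
assumptions, not his patch:)

/* In nbd_teardown_connection(): */
if (qemu_in_coroutine()) {
    s->teardown_co = qemu_coroutine_self();
    /* Woken exactly once, when connection_co is about to terminate */
    qemu_coroutine_yield();
} else {
    BDRV_POLL_WHILE(bs, s->connection_co);
}

/* At the very end of connection_co's entry function: */
s->connection_co = NULL;
if (s->teardown_co) {
    aio_co_wake(s->teardown_co);
}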
Max