qemu-block

Re: [Qemu-block] [RFC 1/5] block/nbd: Fix hang in .bdrv_close()


From: Kevin Wolf
Subject: Re: [Qemu-block] [RFC 1/5] block/nbd: Fix hang in .bdrv_close()
Date: Fri, 12 Jul 2019 13:23:18 +0200
User-agent: Mutt/1.11.3 (2019-02-01)

On 12.07.2019 at 13:09, Max Reitz wrote:
> On 12.07.19 13:01, Kevin Wolf wrote:
> > On 12.07.2019 at 12:47, Max Reitz wrote:
> >> On 12.07.19 11:24, Kevin Wolf wrote:
> >>> On 11.07.2019 at 21:58, Max Reitz wrote:
> >>>> When nbd_close() is called from a coroutine, the connection_co never
> >>>> gets to run, and thus nbd_teardown_connection() hangs.
> >>>>
> >>>> This is because aio_co_enter() only puts the connection_co into the main
> >>>> coroutine's wake-up queue, so this main coroutine needs to yield and
> >>>> reschedule itself to let the connection_co run.
> >>>>
> >>>> Signed-off-by: Max Reitz <address@hidden>
> >>>> ---
> >>>>  block/nbd.c | 12 +++++++++++-
> >>>>  1 file changed, 11 insertions(+), 1 deletion(-)
> >>>>
> >>>> diff --git a/block/nbd.c b/block/nbd.c
> >>>> index 81edabbf35..b83b6cd43e 100644
> >>>> --- a/block/nbd.c
> >>>> +++ b/block/nbd.c
> >>>> @@ -135,7 +135,17 @@ static void nbd_teardown_connection(BlockDriverState *bs)
> >>>>      qio_channel_shutdown(s->ioc,
> >>>>                           QIO_CHANNEL_SHUTDOWN_BOTH,
> >>>>                           NULL);
> >>>> -    BDRV_POLL_WHILE(bs, s->connection_co);
> >>>> +
> >>>> +    if (qemu_in_coroutine()) {
> >>>> +        /* Let our caller poll and just yield until connection_co is done */
> >>>> +        while (s->connection_co) {
> >>>> +            aio_co_schedule(qemu_get_current_aio_context(),
> >>>> +                            qemu_coroutine_self());
> >>>> +            qemu_coroutine_yield();
> >>>> +        }
> >>>
> >>> Isn't this busy waiting? Why not let s->connection_co wake us up when
> >>> it's about to terminate instead of immediately rescheduling ourselves?
> >>
> >> Yes, it is busy waiting, but I didn’t find that bad.  The connection_co
> >> will be invoked in basically every iteration, and once there is no
> >> pending data, it will quit.
> >>
> >> The answer to “why not...” of course is because it’d be more complicated.
> >>
> >> But anyway.
> >>
> >> Adding a new function qemu_coroutine_run_after(target) that adds
> >> qemu_coroutine_self() to the given @target coroutine’s wake-up queue and
> >> then using that instead of scheduling works, too, yes.
> >>
> >> I don’t really like being responsible for coroutine code, though...
> >>
> >> (And maybe it’d be better to make it qemu_coroutine_yield_for(target),
> >> which does the above and then yields?)
> > 
> > Or just do something like this, which is arguably not only a fix for the
> > busy wait, but also a code simplification:
> 
> 1. Is that guaranteed to work?  What if data sneaks in, the
> connection_co handles that, and doesn’t wake up the teardown_co?  Will
> it be re-scheduled?

Then connection_co is buggy because we clearly requested that it
terminate. It is possible that it does so only after handling another
request, but this wouldn't be a problem. teardown_co would then just
sleep for a few cycles more until connection_co is done and reaches the
aio_co_wake() call.

> 2. I precisely didn’t want to do this because we have this functionality
> already in the form of Coroutine.co_queue_wakeup.  Why duplicate it here?

co_queue_wakeup contains coroutines to be run at the next yield point
(or termination), which may be when connection_co is actually done, but
it might also be earlier. My explicit aio_co_wake() at the end of
connection_co is guaranteed to run only when connection_co is done.
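(For comparison, the qemu_coroutine_yield_for(target) helper Max suggested
would look roughly like this -- an untested sketch that assumes the internal
Coroutine layout from qemu/coroutine_int.h, where co_queue_wakeup is a
QSIMPLEQ of coroutines entered at the target's next yield point or at
termination. It illustrates exactly why the wake-up can come earlier than
termination:

```c
/* Sketch only, not a tested patch: park the calling coroutine on
 * @target's wake-up queue and yield.  @target will re-enter us the
 * next time it yields -- which may be before it terminates. */
void coroutine_fn qemu_coroutine_yield_for(Coroutine *target)
{
    Coroutine *self = qemu_coroutine_self();

    /* Append ourselves to @target's wake-up queue... */
    QSIMPLEQ_INSERT_TAIL(&target->co_queue_wakeup, self, co_queue_next);

    /* ...and yield until @target processes that queue. */
    qemu_coroutine_yield();
}
```

So a caller of such a helper would still have to loop on s->connection_co,
whereas the explicit aio_co_wake() fires exactly once, at termination.)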

Kevin

> > diff --git a/block/nbd.c b/block/nbd.c
> > index b83b6cd43e..c061bd1bfc 100644
> > --- a/block/nbd.c
> > +++ b/block/nbd.c
> > @@ -61,6 +61,7 @@ typedef struct BDRVNBDState {
> >      CoMutex send_mutex;
> >      CoQueue free_sema;
> >      Coroutine *connection_co;
> > +    Coroutine *teardown_co;
> >      int in_flight;
> > 
> >      NBDClientRequest requests[MAX_NBD_REQUESTS];
> > @@ -137,12 +138,9 @@ static void nbd_teardown_connection(BlockDriverState *bs)
> >                           NULL);
> > 
> >      if (qemu_in_coroutine()) {
> > -        /* Let our caller poll and just yield until connection_co is done */
> > -        while (s->connection_co) {
> > -            aio_co_schedule(qemu_get_current_aio_context(),
> > -                            qemu_coroutine_self());
> > -            qemu_coroutine_yield();
> > -        }
> > +        /* just yield until connection_co is done */
> > +        s->teardown_co = qemu_coroutine_self();
> > +        qemu_coroutine_yield();
> >      } else {
> >          BDRV_POLL_WHILE(bs, s->connection_co);
> >      }
> > @@ -217,6 +215,9 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
> >      bdrv_dec_in_flight(s->bs);
> > 
> >      s->connection_co = NULL;
> > +    if (s->teardown_co) {
> > +        aio_co_wake(s->teardown_co);
> > +    }
> >      aio_wait_kick();
> >  }
> > 
> 
> 


