qemu-devel
From: Stefan Hajnoczi
Subject: Re: [PATCH v2 6/6] nbd/server: introduce NBDClient->lock to protect fields
Date: Thu, 21 Dec 2023 14:21:06 -0500

On Thu, Dec 21, 2023 at 06:38:16PM +0100, Kevin Wolf wrote:
> Am 21.12.2023 um 16:35 hat Stefan Hajnoczi geschrieben:
> > NBDClient has a number of fields that are accessed by both the export
> > AioContext and the main loop thread. When the AioContext lock is removed
> > these fields will need another form of protection.
> > 
> > Add NBDClient->lock and protect fields that are accessed by both
> > threads. Also add assertions where possible; otherwise, add doc
> > comments stating which thread accesses each field and which lock
> > must be held.
> > 
> > Note this patch moves the client->recv_coroutine assertion from
> > nbd_co_receive_request() to nbd_trip() where client->lock is held.
> > 
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> 
> > +/* Runs in export AioContext */
> > +static void nbd_wake_read_bh(void *opaque)
> > +{
> > +    NBDClient *client = opaque;
> > +    qio_channel_wake_read(client->ioc);
> > +}
> > +
> >  static bool nbd_drained_poll(void *opaque)
> >  {
> >      NBDExport *exp = opaque;
> >      NBDClient *client;
> >  
> > +    assert(qemu_in_main_thread());
> > +
> >      QTAILQ_FOREACH(client, &exp->clients, next) {
> > -        if (client->nb_requests != 0) {
> > -            /*
> > -             * If there's a coroutine waiting for a request on nbd_read_eof()
> > -             * enter it here so we don't depend on the client to wake it up.
> > -             */
> > -            if (client->recv_coroutine != NULL && client->read_yielding) {
> > -                qio_channel_wake_read(client->ioc);
> > +        WITH_QEMU_LOCK_GUARD(&client->lock) {
> > +            if (client->nb_requests != 0) {
> > +                /*
> > +                 * If there's a coroutine waiting for a request on nbd_read_eof()
> > +                 * enter it here so we don't depend on the client to wake it up.
> > +                 *
> > +                 * Schedule a BH in the export AioContext to avoid missing the
> > +                 * wake up due to the race between qio_channel_wake_read() and
> > +                 * qio_channel_yield().
> > +                 */
> > +                if (client->recv_coroutine != NULL && client->read_yielding) {
> > +                    aio_bh_schedule_oneshot(nbd_export_aio_context(client->exp),
> > +                                            nbd_wake_read_bh, client);
> > +                }
> 
> Doesn't the condition have to move inside the BH to avoid the race?
> 
> Checking client->recv_coroutine != NULL could work here because I don't
> think it can go from NULL to something while we're quiescing, but
> client->read_yielding can still change until the BH runs and we know
> that the nbd_co_trip() coroutine has yielded. It seems easiest to just
> move the whole condition to the BH.
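> 
> [For illustration only, not part of the patch: one possible shape of that
> suggestion would be to re-check the condition inside the BH itself, under
> client->lock, so that read_yielding is only observed after the coroutine
> has had a chance to yield in the export AioContext. A rough sketch:]
> 
>     /* Runs in export AioContext; re-checks the wake condition under the
>      * lock instead of relying on the snapshot taken in nbd_drained_poll().
>      */
>     static void nbd_wake_read_bh(void *opaque)
>     {
>         NBDClient *client = opaque;
> 
>         WITH_QEMU_LOCK_GUARD(&client->lock) {
>             if (client->recv_coroutine != NULL && client->read_yielding) {
>                 qio_channel_wake_read(client->ioc);
>             }
>         }
>     }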

I will add aio_wait_kick() into nbd_read_eof() immediately after setting
client->read_yielding. That way nbd_drained_poll() re-runs after
client->read_yielding is set to true.
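
Roughly, the idea is the following (a sketch of the plan described above, not
the final patch; the exact placement inside nbd_read_eof() may differ):

    /* In nbd_read_eof(): set read_yielding under client->lock, then kick
     * the main loop's AIO_WAIT_WHILE() so nbd_drained_poll() re-runs and
     * sees read_yielding == true before the coroutine yields.
     */
    WITH_QEMU_LOCK_GUARD(&client->lock) {
        client->read_yielding = true;
    }
    aio_wait_kick();    /* wake the main loop polling in drained_poll */
    qio_channel_yield(client->ioc, G_IO_IN);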

Stefan
