From: Kevin Wolf
Subject: Re: [PATCH v2 6/6] nbd/server: introduce NBDClient->lock to protect fields
Date: Thu, 21 Dec 2023 18:38:16 +0100
Am 21.12.2023 um 16:35 hat Stefan Hajnoczi geschrieben:
> NBDClient has a number of fields that are accessed by both the export
> AioContext and the main loop thread. When the AioContext lock is removed
> these fields will need another form of protection.
>
> Add NBDClient->lock and protect fields that are accessed by both
> threads. Also add assertions where possible and otherwise add doc
> comments stating assumptions about which threads access the fields and
> which locks must be held.
>
> Note this patch moves the client->recv_coroutine assertion from
> nbd_co_receive_request() to nbd_trip() where client->lock is held.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> +/* Runs in export AioContext */
> +static void nbd_wake_read_bh(void *opaque)
> +{
> +    NBDClient *client = opaque;
> +    qio_channel_wake_read(client->ioc);
> +}
> +
>  static bool nbd_drained_poll(void *opaque)
>  {
>      NBDExport *exp = opaque;
>      NBDClient *client;
>  
> +    assert(qemu_in_main_thread());
> +
>      QTAILQ_FOREACH(client, &exp->clients, next) {
> -        if (client->nb_requests != 0) {
> -            /*
> -             * If there's a coroutine waiting for a request on nbd_read_eof()
> -             * enter it here so we don't depend on the client to wake it up.
> -             */
> -            if (client->recv_coroutine != NULL && client->read_yielding) {
> -                qio_channel_wake_read(client->ioc);
> +        WITH_QEMU_LOCK_GUARD(&client->lock) {
> +            if (client->nb_requests != 0) {
> +                /*
> +                 * If there's a coroutine waiting for a request on
> +                 * nbd_read_eof() enter it here so we don't depend on the
> +                 * client to wake it up.
> +                 *
> +                 * Schedule a BH in the export AioContext to avoid missing
> +                 * the wake up due to the race between
> +                 * qio_channel_wake_read() and qio_channel_yield().
> +                 */
> +                if (client->recv_coroutine != NULL && client->read_yielding) {
> +                    aio_bh_schedule_oneshot(nbd_export_aio_context(client->exp),
> +                                            nbd_wake_read_bh, client);
> +                }
Doesn't the condition have to move inside the BH to avoid the race?
Checking client->recv_coroutine != NULL could work here because I don't
think it can go from NULL to something while we're quiescing, but
client->read_yielding can still change until the BH runs and we know
that the nbd_co_trip() coroutine has yielded. It seems easiest to just
move the whole condition to the BH.
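Something like this untested sketch is what I have in mind (assuming it
is fine for the BH to take client->lock itself, which it should be since
it's an ordinary QemuMutex):

    /* Runs in export AioContext */
    static void nbd_wake_read_bh(void *opaque)
    {
        NBDClient *client = opaque;

        /*
         * Recheck the condition under client->lock: read_yielding can
         * still change until the coroutine has actually yielded in
         * qio_channel_yield(), but by the time this BH runs in the
         * export AioContext, the coroutine is not running, so the
         * check is stable.
         */
        WITH_QEMU_LOCK_GUARD(&client->lock) {
            if (client->recv_coroutine != NULL && client->read_yielding) {
                qio_channel_wake_read(client->ioc);
            }
        }
    }

nbd_drained_poll() would then just schedule the BH unconditionally
whenever client->nb_requests != 0.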
> +                return true;
>              }
> -
> -            return true;
>          }
>      }
The rest looks good to me.
Kevin