Re: [PATCH for-7.1 5/8] nbd: use a QemuMutex to synchronize reconnection
From: Eric Blake
Subject: Re: [PATCH for-7.1 5/8] nbd: use a QemuMutex to synchronize reconnection with coroutines
Date: Wed, 13 Apr 2022 10:55:34 -0500
User-agent: NeoMutt/20211029-6-a115bf
On Tue, Apr 12, 2022 at 09:42:01PM +0200, Paolo Bonzini wrote:
> The condition for waiting on the s->free_sema queue depends on
> both s->in_flight and s->state. The latter is currently using
> atomics, but this is quite dubious and probably wrong.
>
> Because s->state is written in the main thread too, for example by
> the reconnect timer callback, it cannot be protected by a CoMutex.
> Introduce a separate lock that can be used by nbd_co_send_request();
> later on this lock will also be used for s->state. There will not
> be any contention on the lock unless there is a reconnect, so this
> is not performance sensitive.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> block/nbd.c | 46 +++++++++++++++++++++++++++-------------------
> 1 file changed, 27 insertions(+), 19 deletions(-)
>
> diff --git a/block/nbd.c b/block/nbd.c
> index 0ff41cb914..c908ea6ae3 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -72,17 +72,22 @@ typedef struct BDRVNBDState {
> QIOChannel *ioc; /* The current I/O channel */
> NBDExportInfo info;
>
> - CoMutex send_mutex;
> + /*
> + * Protects free_sema, in_flight, requests[].coroutine,
> + * reconnect_delay_timer.
> + */
> + QemuMutex requests_lock;
> CoQueue free_sema;
> -
> - CoMutex receive_mutex;
> int in_flight;
> + NBDClientRequest requests[MAX_NBD_REQUESTS];
> + QEMUTimer *reconnect_delay_timer;
> +
> + CoMutex send_mutex;
> + CoMutex receive_mutex;
> NBDClientState state;
>
> - QEMUTimer *reconnect_delay_timer;
> QEMUTimer *open_timer;
>
> - NBDClientRequest requests[MAX_NBD_REQUESTS];
Reordering of the struct makes sense.
> @@ -468,11 +473,10 @@ static int coroutine_fn nbd_co_send_request(BlockDriverState *bs,
> BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
> int rc, i = -1;
>
> - qemu_co_mutex_lock(&s->send_mutex);
> -
> + qemu_mutex_lock(&s->requests_lock);
> while (s->in_flight == MAX_NBD_REQUESTS ||
> (!nbd_client_connected(s) && s->in_flight > 0)) {
> - qemu_co_queue_wait(&s->free_sema, &s->send_mutex);
> + qemu_co_queue_wait(&s->free_sema, &s->requests_lock);
> }
>
> s->in_flight++;
> @@ -493,14 +497,14 @@ static int coroutine_fn nbd_co_send_request(BlockDriverState *bs,
> }
> }
>
> - g_assert(qemu_in_coroutine());
Why is this assert dropped? Is it because we've marked the function
with coroutine_fn? If so, should we drop it earlier in the series,
when you added the label?
Otherwise, the patch makes sense to me.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org