[PULL 01/25] block: make BlockBackend->quiesce_counter atomic
From: Kevin Wolf
Subject: [PULL 01/25] block: make BlockBackend->quiesce_counter atomic
Date: Tue, 25 Apr 2023 15:13:35 +0200
From: Stefan Hajnoczi <stefanha@redhat.com>
The main loop thread increments/decrements BlockBackend->quiesce_counter
when drained sections begin/end. The counter is read in the I/O code
path. Therefore this field is used to communicate between threads
without a lock.
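As a rough standalone sketch of that pattern (not QEMU code: C11 stdatomic
stands in for the qatomic_*() helpers, and the Backend type and function
names are made up for illustration):

#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_int quiesce_counter;   /* main loop thread writes, I/O path reads */
} Backend;

/* Main loop thread only: called when a drained section begins/ends. */
static void backend_drained_begin(Backend *b)
{
    atomic_fetch_add(&b->quiesce_counter, 1);
}

static void backend_drained_end(Backend *b)
{
    atomic_fetch_sub(&b->quiesce_counter, 1);
}

/* I/O code path, possibly another thread: no lock is taken. */
static bool backend_quiesced(Backend *b)
{
    return atomic_load(&b->quiesce_counter) > 0;
}

Only the main loop thread ever modifies the counter; the reader merely needs
an atomic, tear-free load.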
Acquire/release ordering is not necessary because the BlockBackend->in_flight
counter already uses sequentially consistent accesses and running I/O
requests hold that counter when blk_wait_while_drained() is called.
qatomic_read() can be used.
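A simplified model of that reasoning (illustrative only: C11 atomics stand in
for qatomic_*(), the function names are invented, and the real code polls the
event loop rather than spinning). A request makes itself visible through
in_flight with a sequentially consistent RMW before it checks
quiesce_counter, so the drain loop cannot finish while such a request is
still outstanding, even if the request read a stale counter value:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int quiesce_counter;   /* bumped by drained_begin in the main loop */
static atomic_int in_flight;         /* held by every running I/O request */

/* I/O path: roughly blk_inc_in_flight() followed by the check that
 * blk_wait_while_drained() performs. */
static bool request_must_queue(void)
{
    atomic_fetch_add(&in_flight, 1);                 /* seq-cst RMW */
    /* relaxed load models qatomic_read() */
    return atomic_load_explicit(&quiesce_counter, memory_order_relaxed) > 0;
}

/* Main loop: roughly drained_begin followed by the drained_poll loop. */
static void drain_and_wait(void)
{
    atomic_fetch_add(&quiesce_counter, 1);
    while (atomic_load(&in_flight) > 0) {
        /* The drain does not finish until every request that bumped
         * in_flight has dropped it again, whether or not that request saw
         * the new quiesce_counter value, so no acquire/release pairing on
         * quiesce_counter is needed. */
    }
}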
Use qatomic_fetch_inc()/qatomic_fetch_dec() for modifications even
though sequentially consistent atomic accesses are not strictly required
here. They are, however, nicer to read than multiple calls to
qatomic_read() and qatomic_set(). Since beginning and ending drain is
not a hot path, the extra cost doesn't matter.
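For illustration, the two styles that paragraph weighs against each other,
again with C11 atomics standing in for the QEMU helpers and with invented
function names. The fetch form returns the previous value, which makes the
0 -> 1 (and, symmetrically, 1 -> 0) nesting edges used in the hunks below
easy to check:

#include <stdatomic.h>

static atomic_int quiesce_counter;

/* Style used by the patch: the old value exposes the nesting edge. */
static void drained_begin_fetch_style(void)
{
    if (atomic_fetch_add(&quiesce_counter, 1) == 0) {
        /* first level of nesting: call dev_ops->drained_begin() */
    }
}

/* The read/set alternative: also correct, since only the main loop thread
 * writes the counter, but wordier. */
static void drained_begin_read_set_style(void)
{
    int old = atomic_load_explicit(&quiesce_counter, memory_order_relaxed);
    atomic_store_explicit(&quiesce_counter, old + 1, memory_order_relaxed);
    if (old == 0) {
        /* same 0 -> 1 edge */
    }
}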
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230307210427.269214-2-stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block/block-backend.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/block/block-backend.c b/block/block-backend.c
index 5566ea059d..2ae768f24d 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -80,7 +80,7 @@ struct BlockBackend {
NotifierList remove_bs_notifiers, insert_bs_notifiers;
QLIST_HEAD(, BlockBackendAioNotifier) aio_notifiers;
- int quiesce_counter;
+ int quiesce_counter; /* atomic: written under BQL, read by other threads */
CoQueue queued_requests;
bool disable_request_queuing;
@@ -1057,7 +1057,7 @@ void blk_set_dev_ops(BlockBackend *blk, const BlockDevOps *ops,
blk->dev_opaque = opaque;
/* Are we currently quiesced? Should we enforce this right now? */
- if (blk->quiesce_counter && ops && ops->drained_begin) {
+ if (qatomic_read(&blk->quiesce_counter) && ops && ops->drained_begin) {
ops->drained_begin(opaque);
}
}
@@ -1271,7 +1271,7 @@ static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
{
assert(blk->in_flight > 0);
- if (blk->quiesce_counter && !blk->disable_request_queuing) {
+ if (qatomic_read(&blk->quiesce_counter) && !blk->disable_request_queuing) {
blk_dec_in_flight(blk);
qemu_co_queue_wait(&blk->queued_requests, NULL);
blk_inc_in_flight(blk);
@@ -2595,7 +2595,7 @@ static void blk_root_drained_begin(BdrvChild *child)
BlockBackend *blk = child->opaque;
ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
- if (++blk->quiesce_counter == 1) {
+ if (qatomic_fetch_inc(&blk->quiesce_counter) == 0) {
if (blk->dev_ops && blk->dev_ops->drained_begin) {
blk->dev_ops->drained_begin(blk->dev_opaque);
}
@@ -2613,7 +2613,7 @@ static bool blk_root_drained_poll(BdrvChild *child)
{
BlockBackend *blk = child->opaque;
bool busy = false;
- assert(blk->quiesce_counter);
+ assert(qatomic_read(&blk->quiesce_counter));
if (blk->dev_ops && blk->dev_ops->drained_poll) {
busy = blk->dev_ops->drained_poll(blk->dev_opaque);
@@ -2624,12 +2624,12 @@ static bool blk_root_drained_poll(BdrvChild *child)
static void blk_root_drained_end(BdrvChild *child)
{
BlockBackend *blk = child->opaque;
- assert(blk->quiesce_counter);
+ assert(qatomic_read(&blk->quiesce_counter));
assert(blk->public.throttle_group_member.io_limits_disabled);
qatomic_dec(&blk->public.throttle_group_member.io_limits_disabled);
- if (--blk->quiesce_counter == 0) {
+ if (qatomic_fetch_dec(&blk->quiesce_counter) == 1) {
if (blk->dev_ops && blk->dev_ops->drained_end) {
blk->dev_ops->drained_end(blk->dev_opaque);
}
--
2.40.0