Subject: Re: [Qemu-devel] [PATCH v4 01/17] block: stop relying on io_flush() in bdrv_drain_all()
From: Paolo Bonzini
Date: Thu, 27 Jun 2013 15:13:19 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130514 Thunderbird/17.0.6
On 14/06/2013 13:43, Stefan Hajnoczi wrote:
> If a block driver has no file descriptors to monitor but there are still
> active requests, it can return 1 from .io_flush(). This is used to spin
> during synchronous I/O.
>
> Stop relying on .io_flush() and instead check
> QLIST_EMPTY(&bs->tracked_requests) to decide whether there are active
> requests.
>
> This is the first step in removing .io_flush() so that event loops no
> longer need to have the concept of synchronous I/O. Eventually we may
> be able to kill synchronous I/O completely by running everything in a
> coroutine, but that is future work.
>
> Note this patch moves bs->throttled_reqs initialization to bdrv_new() so
> that bdrv_requests_pending(bs) can safely access it. In practice bs is
> g_malloc0() so the memory is already zeroed but it's safer to initialize
> the queue properly.
>
> In bdrv_delete() make sure to call bdrv_make_anon() *after* bdrv_close()
> so that the device is still seen by bdrv_drain_all() when iterating
> bdrv_states.
I wonder if this last change should be separated out and CCed to
qemu-stable. It seems like a bug to close a device that still has
pending throttled operations.

Paolo
> Signed-off-by: Stefan Hajnoczi <address@hidden>
> ---
> block.c | 50 +++++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 37 insertions(+), 13 deletions(-)
>
> diff --git a/block.c b/block.c
> index 79ad33d..04821d8 100644
> --- a/block.c
> +++ b/block.c
> @@ -148,7 +148,6 @@ static void bdrv_block_timer(void *opaque)
>
> void bdrv_io_limits_enable(BlockDriverState *bs)
> {
> - qemu_co_queue_init(&bs->throttled_reqs);
> bs->block_timer = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
> bs->io_limits_enabled = true;
> }
> @@ -305,6 +304,7 @@ BlockDriverState *bdrv_new(const char *device_name)
> }
> bdrv_iostatus_disable(bs);
> notifier_list_init(&bs->close_notifiers);
> + qemu_co_queue_init(&bs->throttled_reqs);
>
> return bs;
> }
> @@ -1412,6 +1412,35 @@ void bdrv_close_all(void)
> }
> }
>
> +/* Check if any requests are in-flight (including throttled requests) */
> +static bool bdrv_requests_pending(BlockDriverState *bs)
> +{
> + if (!QLIST_EMPTY(&bs->tracked_requests)) {
> + return true;
> + }
> + if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
> + return true;
> + }
> + if (bs->file && bdrv_requests_pending(bs->file)) {
> + return true;
> + }
> + if (bs->backing_hd && bdrv_requests_pending(bs->backing_hd)) {
> + return true;
> + }
> + return false;
> +}
> +
> +static bool bdrv_requests_pending_all(void)
> +{
> + BlockDriverState *bs;
> + QTAILQ_FOREACH(bs, &bdrv_states, list) {
> + if (bdrv_requests_pending(bs)) {
> + return true;
> + }
> + }
> + return false;
> +}
> +
> /*
> * Wait for pending requests to complete across all BlockDriverStates
> *
> @@ -1426,27 +1455,22 @@ void bdrv_close_all(void)
> */
> void bdrv_drain_all(void)
> {
> + /* Always run first iteration so any pending completion BHs run */
> + bool busy = true;
> BlockDriverState *bs;
> - bool busy;
> -
> - do {
> - busy = qemu_aio_wait();
>
> + while (busy) {
> /* FIXME: We do not have timer support here, so this is effectively
> * a busy wait.
> */
> QTAILQ_FOREACH(bs, &bdrv_states, list) {
> if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
> qemu_co_queue_restart_all(&bs->throttled_reqs);
> - busy = true;
> }
> }
> - } while (busy);
>
> - /* If requests are still pending there is a bug somewhere */
> - QTAILQ_FOREACH(bs, &bdrv_states, list) {
> - assert(QLIST_EMPTY(&bs->tracked_requests));
> - assert(qemu_co_queue_empty(&bs->throttled_reqs));
> + busy = bdrv_requests_pending_all();
> + busy |= aio_poll(qemu_get_aio_context(), busy);
> }
> }
>
> @@ -1591,11 +1615,11 @@ void bdrv_delete(BlockDriverState *bs)
> assert(!bs->job);
> assert(!bs->in_use);
>
> + bdrv_close(bs);
> +
> /* remove from list, if necessary */
> bdrv_make_anon(bs);
>
> - bdrv_close(bs);
> -
> g_free(bs);
> }
>
>