
From: Kevin Wolf
Subject: Re: [Qemu-block] [PATCH v10 01/16] block: Pause all jobs during bdrv_reopen_multiple()
Date: Mon, 10 Oct 2016 17:37:56 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 06.10.2016 at 15:02, Alberto Garcia wrote:
> When a BlockDriverState is about to be reopened it can trigger certain
> operations that need to write to disk. During this process a different
> block job can be woken up. If that block job completes and also needs
> to call bdrv_reopen() it can happen that it needs to do it on the same
> BlockDriverState that is still in the process of being reopened.
> 
> This can have fatal consequences, like in this example:
> 
>   1) Block job A starts and sleeps after a while.
>   2) Block job B starts and tries to reopen node1 (a qcow2 file).
>   3) Reopening node1 means flushing and replacing its qcow2 cache.
>   4) While the qcow2 cache is being flushed, job A wakes up.
>   5) Job A completes and reopens node1, replacing its cache.
>   6) Job B resumes, but the cache that was being flushed no longer
>      exists.
> 
> This patch pauses all block jobs during bdrv_reopen_multiple(), so
> that step 4 can never happen and the operation is safe.
> 
> Note that this scenario can only happen if both bdrv_reopen() calls
> are made by block jobs on the same backing chain. Otherwise there's no
> chance that the same BlockDriverState appears in both reopen queues.
> 
> Signed-off-by: Alberto Garcia <address@hidden>
> ---
>  block.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/block.c b/block.c
> index bb1f1ec..c80b528 100644
> --- a/block.c
> +++ b/block.c
> @@ -2087,9 +2087,19 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
>      int ret = -1;
>      BlockReopenQueueEntry *bs_entry, *next;
>      Error *local_err = NULL;
> +    BlockJob *job = NULL;
>  
>      assert(bs_queue != NULL);
>  
> +    /* Pause all block jobs */
> +    while ((job = block_job_next(job))) {
> +        AioContext *aio_context = blk_get_aio_context(job->blk);
> +
> +        aio_context_acquire(aio_context);
> +        block_job_pause(job);
> +        aio_context_release(aio_context);
> +    }
> +
>      bdrv_drain_all();

We already have a bdrv_drain_all() here, which does the same thing (and
more) internally, except that it resumes all jobs before it returns.
Maybe what we should do is split bdrv_drain_all() in a begin/end pair,
too.

If we don't split it, we'd have to do the "and more" part here as well,
disabling all other potential users of the BDSes. This would involve at
least calling bdrv_parent_drained_begin/end().

The other point I'm wondering now is whether bdrv_drain_all() should
have the aio_disable/enable_external() pair that bdrv_drain() has.

Paolo, any opinion?

>      QSIMPLEQ_FOREACH(bs_entry, bs_queue, entry) {
> @@ -2120,6 +2130,17 @@ cleanup:
>          g_free(bs_entry);
>      }
>      g_free(bs_queue);
> +
> +    /* Resume all block jobs */
> +    job = NULL;
> +    while ((job = block_job_next(job))) {
> +        AioContext *aio_context = blk_get_aio_context(job->blk);
> +
> +        aio_context_acquire(aio_context);
> +        block_job_resume(job);
> +        aio_context_release(aio_context);
> +    }
> +
>      return ret;
>  }

Kevin


