From: Markus Armbruster
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use blk_drain() to drain IO requests
Date: Mon, 29 Jun 2015 08:10:20 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.3 (gnu/linux)

Alexander Yarygin <address@hidden> writes:

> Markus Armbruster <address@hidden> writes:
>
>> Just spotted this in my git-pull...
>>
>> Alexander Yarygin <address@hidden> writes:
>>
>>> Each call to the virtio_blk_reset() function invokes blk_drain_all(),
>>> which drains all existing BlockDriverStates, although draining only
>>> one is needed.
>>>
>>> This patch replaces blk_drain_all() with blk_drain() in
>>> virtio_blk_reset(). virtio_blk_data_plane_stop() should be called
>>> after draining because it restores vblk->complete_request.
>>>
>>> Cc: "Michael S. Tsirkin" <address@hidden>
>>> Cc: Christian Borntraeger <address@hidden>
>>> Cc: Cornelia Huck <address@hidden>
>>> Cc: Kevin Wolf <address@hidden>
>>> Cc: Paolo Bonzini <address@hidden>
>>> Cc: Stefan Hajnoczi <address@hidden>
>>> Signed-off-by: Alexander Yarygin <address@hidden>
>>> ---
>>>  hw/block/virtio-blk.c | 15 ++++++++++-----
>>>  1 file changed, 10 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
>>> index e6afe97..d8a906f 100644
>>> --- a/hw/block/virtio-blk.c
>>> +++ b/hw/block/virtio-blk.c
>>> @@ -651,16 +651,21 @@ static void virtio_blk_dma_restart_cb(void *opaque, int running,
>>>  static void virtio_blk_reset(VirtIODevice *vdev)
>>>  {
>>>      VirtIOBlock *s = VIRTIO_BLK(vdev);
>>> -
>>> -    if (s->dataplane) {
>>> -        virtio_blk_data_plane_stop(s->dataplane);
>>> -    }
>>> +    AioContext *ctx;
>>>  
>>>      /*
>>>       * This should cancel pending requests, but can't do nicely until there
>>>       * are per-device request lists.
>>>       */
>>> -    blk_drain_all();
>>> +    ctx = blk_get_aio_context(s->blk);
>>> +    aio_context_acquire(ctx);
>>> +    blk_drain(s->blk);
>>> +
>>> +    if (s->dataplane) {
>>> +        virtio_blk_data_plane_stop(s->dataplane);
>>> +    }
>>> +    aio_context_release(ctx);
>>> +
>>>      blk_set_enable_write_cache(s->blk, s->original_wce);
>>>  }
>>
>> From bdrv_drain_all()'s comment:
>>
>>  * Note that completion of an asynchronous I/O operation can trigger any
>>  * number of other I/O operations on other devices---for example a coroutine
>>  * can be arbitrarily complex and a constant flow of I/O can come until the
>>  * coroutine is complete.  Because of this, it is not possible to have a
>>  * function to drain a single device's I/O queue.
>>
>> From bdrv_drain()'s comment:
>>
>>  * See the warning in bdrv_drain_all().  This function can only be called if
>>  * you are sure nothing can generate I/O because you have op blockers
>>  * installed.
>>
>> blk_drain() and blk_drain_all() are trivial wrappers.
>>
>> Ignorant questions:
>>
>> * Why does blk_drain() suffice here?
>>
>> * Is blk_drain() (created in PATCH 1) even a safe interface?
>
> * We want to drain requests from only one bdrv and blk_drain() can do
>   that.

It's never been a question of not wanting to drain just one device; it's
been a problem of it not working.  But point taken.
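
For reference, the wrappers in question look roughly like this (a sketch
paraphrased from block/block-backend.c as of this series; the exact code
may differ a bit):

    /* blk_drain() waits for in-flight requests on this backend's
     * BlockDriverState only, while blk_drain_all() falls through to the
     * global variant that drains every BlockDriverState in the system.
     */
    void blk_drain(BlockBackend *blk)
    {
        bdrv_drain(blk->bs);
    }

    void blk_drain_all(void)
    {
        bdrv_drain_all();
    }

Being trivial wrappers also means the op-blocker warning in
bdrv_drain()'s comment applies to blk_drain() unchanged.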

> * Ignorant answer: I was told that bdrv_drain_all()'s comment is
>   obsolete and that we can use bdrv_drain(). Here is a link to the old
>   thread: http://marc.info/?l=qemu-devel&m=143154211017926&w=2.

Kevin, Stefan, if the comment has become wrong, it needs to be redone.
Who's going to take care of it?

>                                                                 Since I
>   don't see the full picture of this area yet, I'm just relying on other
>   people's opinions.

That's fair, we all do :)


