From: Alexander Yarygin
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH] block: Let bdrv_drain_all() to call aio_poll() for each AioContext
Date: Thu, 14 May 2015 13:57:32 +0300
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.0.50 (gnu/linux)

Fam Zheng <address@hidden> writes:

> On Wed, 05/13 19:34, Alexander Yarygin wrote:
>> Paolo Bonzini <address@hidden> writes:
>> 
>> > On 13/05/2015 17:18, Alexander Yarygin wrote:
>> >> After commit 9b536adc ("block: acquire AioContext in
>> >> bdrv_drain_all()") the aio_poll() function gets called for every
>> >> BlockDriverState, on the assumption that every device may have its
>> >> own AioContext. The bdrv_drain_all() function is called in each
>> >> virtio_reset() call,
>> >
>> > ... which should actually call bdrv_drain().  Can you fix that?
>> >
>> 
>> I thought about it, but couldn't come to the conclusion that it's safe.
>> The comment above bdrv_drain_all() states "... it is not possible to
>> have a function to drain a single device's I/O queue.",
>
> I think that comment is stale - it predates the introduction of per-BDS
> request tracking and bdrv_drain.
>

It says "completion of an asynchronous I/O operation can trigger any
number of other I/O operations on other devices". If this is no longer
the case, then I agree :). But I don't think that rules out this
patch anyway: bdrv_drain_all() is called in other places as well,
e.g. in do_vm_stop().

>> besides that, what if we
>> have several virtual disks that share a host file?
>
> I'm not sure what you mean; bdrv_drain works on a BDS, and each virtual
> disk has one of those.
>
>> Or I'm wrong and it's ok to do?
>> 
>> >> which in turn is called for every virtio-blk
>> >> device on initialization, so we get aio_poll() called
>> >> 'length(device_list)^2' times.
>> >> 
>> >> If we have thousands of disks attached, there are a lot of
>> >> BlockDriverStates but only a few AioContexts, leading to tons of
>> >> unnecessary aio_poll() calls. For example, startup with 1000 disks
>> >> takes over 13 minutes.
>> >> 
>> >> This patch changes the bdrv_drain_all() function, allowing it to find
>> >> shared AioContexts and to call aio_poll() only for unique ones. This
>> >> results in much better startup times, e.g. 1000 disks now come up
>> >> within 5 seconds.
>> >
>> > I'm not sure this patch is correct.  You may have to call aio_poll
>> > multiple times before a BlockDriverState is drained.
>> >
>> > Paolo
>> >
>> 
>> 
>> Ah, right. We need a second loop, something like this:
>> 
>> @@ -2030,20 +2033,33 @@ void bdrv_drain(BlockDriverState *bs)
>>  void bdrv_drain_all(void)
>>  {
>>      /* Always run first iteration so any pending completion BHs run */
>> -    bool busy = true;
>> +    bool busy = true, pending = false;
>>      BlockDriverState *bs;
>> +    GList *aio_ctxs = NULL, *ctx;
>> +    AioContext *aio_context;
>> 
>>      while (busy) {
>>          busy = false;
>> 
>>          QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
>> -            AioContext *aio_context = bdrv_get_aio_context(bs);
>> +            aio_context = bdrv_get_aio_context(bs);
>> 
>>              aio_context_acquire(aio_context);
>>              busy |= bdrv_drain_one(bs);
>>              aio_context_release(aio_context);
>> +            if (!aio_ctxs || !g_list_find(aio_ctxs, aio_context))
>> +                aio_ctxs = g_list_append(aio_ctxs, aio_context);
>
> Braces are required even for a single-line if. Moreover, I don't understand this
> - aio_ctxs is a duplicate of bdrv_states.
>
> Fam
>
>

length(bdrv_states) == number of virtual disks
length(aio_ctxs) == number of threads

We can have as many disks as we want, while the number of threads is
limited. In my case there were 1024 disks sharing one AioContext, which
gives an overhead of at least 1023 unnecessary aio_poll() calls.
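For illustration, a rough sketch of how the two loops could fit together
once the aio_poll() call is moved out of the per-BDS step. This is only a
sketch of the idea discussed above, not the patch that was eventually
merged, and it assumes bdrv_drain_one() no longer calls aio_poll() itself:

void bdrv_drain_all(void)
{
    /* Always run first iteration so any pending completion BHs run */
    bool busy = true;
    BlockDriverState *bs;
    GList *aio_ctxs = NULL, *ctx;
    AioContext *aio_context;

    while (busy) {
        busy = false;

        /* First loop: drain each BDS and collect its AioContext,
         * skipping contexts we have already seen, so that disks
         * sharing one context are only counted once. */
        QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
            aio_context = bdrv_get_aio_context(bs);

            aio_context_acquire(aio_context);
            busy |= bdrv_drain_one(bs);
            aio_context_release(aio_context);

            if (!g_list_find(aio_ctxs, aio_context)) {
                aio_ctxs = g_list_append(aio_ctxs, aio_context);
            }
        }

        /* Second loop: poll each unique AioContext once per iteration.
         * aio_poll() may have to run several times before everything is
         * drained, which the surrounding while (busy) takes care of. */
        for (ctx = aio_ctxs; ctx; ctx = ctx->next) {
            aio_context = ctx->data;

            aio_context_acquire(aio_context);
            busy |= aio_poll(aio_context, false);
            aio_context_release(aio_context);
        }
    }

    g_list_free(aio_ctxs);
}

With 1024 disks sharing one AioContext, the QTAILQ_FOREACH still visits
every BDS, but aio_poll() now runs once per context per iteration instead
of once per disk.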

[.. skipped ..]



