

From: Peter Lieven
Subject: Re: [Qemu-devel] [Qemu-stable] [PATCH for-2.9] thread-pool: add missing qemu_bh_cancel in completion function
Date: Thu, 16 Mar 2017 21:24:20 +0100

> On 16.03.2017 at 17:18, Paolo Bonzini <address@hidden> wrote:
> 
> 
> 
> On 16/03/2017 17:02, Peter Lieven wrote:
>> commit 3c80ca15 fixed a deadlock scenario with nested aio_poll invocations.
>> 
>> However, the rescheduling of the completion BH introduced unnecessary spinning
>> in the main-loop. On very fast file backends this can even lead to the
>> "WARNING: I/O thread spun for 1000 iterations" message popping up.
>> 
>> Callgrind reports about 3-4% fewer instructions with this patch running
>> qemu-img bench on a ramdisk based VMDK file.
>> 
>> Fixes: 3c80ca158c96ff902a30883a8933e755988948b1
>> Cc: address@hidden
>> Signed-off-by: Peter Lieven <address@hidden>
>> ---
>> util/thread-pool.c | 7 +++++++
>> 1 file changed, 7 insertions(+)
>> 
>> diff --git a/util/thread-pool.c b/util/thread-pool.c
>> index ce6cd30..610646d 100644
>> --- a/util/thread-pool.c
>> +++ b/util/thread-pool.c
>> @@ -188,6 +188,13 @@ restart:
>>             aio_context_release(pool->ctx);
>>             elem->common.cb(elem->common.opaque, elem->ret);
>>             aio_context_acquire(pool->ctx);
>> +
>> +            /* We can safely cancel the completion_bh here regardless of someone
>> +             * else having scheduled it meanwhile because we reenter the
>> +             * completion function anyway (goto restart).
>> +             */
>> +            qemu_bh_cancel(pool->completion_bh);
>> +
>>             qemu_aio_unref(elem);
>>             goto restart;
>>         } else {
>> 
> 
> Right, this is the same thing that block/linux-aio.c does.


Okay, so you also think it's safe to do this? As far as I have seen, you have
been working a lot on the aio code recently.

Thanks,
Peter
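
For reference, the analogous logic in block/linux-aio.c that Paolo mentions
reduces to roughly the sketch below. This is only an approximation:
CompletionState, completion_bh_cb() and process_completions() are illustrative
placeholder names rather than the actual QEMU identifiers; qemu_bh_schedule()
and qemu_bh_cancel() are the real bottom-half primitives. The idea is to
schedule the completion BH up front so that nested aio_poll() calls still see
pending work, then cancel it once the draining loop has handled everything,
because an extra BH run at that point would only make the main loop spin.

#include "qemu/osdep.h"
#include "block/aio.h"

typedef struct CompletionState {
    QEMUBH *completion_bh;   /* bottom half driving completion handling */
    /* ... queue of finished requests, AioContext, etc. ... */
} CompletionState;

/* Hypothetical helper: runs the user callbacks for every finished request;
 * those callbacks may recursively enter aio_poll(). */
static void process_completions(CompletionState *s);

static void completion_bh_cb(void *opaque)
{
    CompletionState *s = opaque;

    /* Reschedule up front so that a nested aio_poll(), entered from one of
     * the callbacks run below, still sees the pending completions. */
    qemu_bh_schedule(s->completion_bh);

    process_completions(s);

    /* Everything pending was handled above; a BH run scheduled in the
     * meantime would only make the main loop spin once for nothing. */
    qemu_bh_cancel(s->completion_bh);
}

The same reasoning applies to thread-pool.c above: the "goto restart" already
re-scans the request list, so a completion BH scheduled while a callback was
running has nothing left to do.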
