Re: [Qemu-devel] qemu crash in coroutine bdrv_co_do_rw
From: Christian Borntraeger
Subject: Re: [Qemu-devel] qemu crash in coroutine bdrv_co_do_rw
Date: Tue, 10 Mar 2015 12:27:36 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.5.0
On 10.03.2015 at 11:33, Paolo Bonzini wrote:
>
>
> On 10/03/2015 08:54, Christian Borntraeger wrote:
>> On 09.03.2015 at 21:37, Christian Borntraeger wrote:
>>> On 06.03.2015 at 18:23, Stefan Hajnoczi wrote:
>>>> On Thu, Feb 26, 2015 at 10:29:57AM +0100, Christian Borntraeger wrote:
>>>>> Is this a known issue? Under heavy load with lots of dataplane devices
>>>>> I sometimes get a segfault in the bdrv_co_do_rw routine:
>>>>>
>>>>> #0 bdrv_co_do_rw (opaque=0x0) at /home/cborntra/REPOS/qemu/block.c:4791
>>>>> 4791 if (!acb->is_write) {
>>>>> (gdb) bt
>>>>> #0 bdrv_co_do_rw (opaque=0x0) at /home/cborntra/REPOS/qemu/block.c:4791
>>>>> #1 0x00000000801aeb78 in coroutine_trampoline (i0=<optimized out>,
>>>>> i1=-725099072) at /home/cborntra/REPOS/qemu/coroutine-ucontext.c:80
>>>>> #2 0x000003fffbe1cca2 in __makecontext_ret () from /lib64/libc.so.6
>>>>> Backtrace stopped: previous frame identical to this frame (corrupt stack?)
>>>>> (gdb) up
>>>>> #1 0x00000000801aeb78 in coroutine_trampoline (i0=<optimized out>,
>>>>> i1=-725099072) at /home/cborntra/REPOS/qemu/coroutine-ucontext.c:80
>>>>> 80 co->entry(co->entry_arg);
>>>>> (gdb) print *co
>>>>> $1 = {entry = 0x801a3c28 <bdrv_co_do_rw>, entry_arg = 0x0,
>>>>>       caller = 0x3ffe2fff788, pool_next = {sle_next = 0x3ffd2287990},
>>>>>       co_queue_wakeup = {tqh_first = 0x0, tqh_last = 0x3ffd4c7dde0},
>>>>>       co_queue_next = {tqe_next = 0x0, tqe_prev = 0x0}}
>>>>>
>>>>> As you can see, entry_arg is 0, causing the problem. Do you have any quick
>>>>> idea before I start debugging?
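
For readers following the trace, the two frames involved are short. Paraphrased
from the QEMU sources of that time (the exact AIOCB struct name is from memory,
so treat this as a sketch rather than a verbatim quote):

    /* coroutine-ucontext.c:80 -- the trampoline simply calls the entry point
     * with the argument stored at coroutine creation time; in the dump above
     * entry_arg is NULL. */
    co->entry(co->entry_arg);

    /* block.c:4791 -- bdrv_co_do_rw() casts its opaque argument to the
     * coroutine AIOCB and dereferences it immediately, so a NULL entry_arg
     * faults on the very first field access. */
    static void coroutine_fn bdrv_co_do_rw(void *opaque)
    {
        BlockAIOCBCoroutine *acb = opaque;   /* == entry_arg, NULL here */

        if (!acb->is_write) {                /* <-- SIGSEGV at block.c:4791 */
            /* read path */
        }
        /* write path ... */
    }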
>>>>
>>>> No, I haven't seen this bug before. Are you running qemu.git/master?
>>>>
>>>> Have you tried disabling the coroutine pool (freelist)?
>>>>
>>>> Stefan
>>>>
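
(As an aside on Stefan's suggestion: the pool is a compile-time option, so
disabling it means rebuilding. Assuming the configure switch is still named the
same in the tree under test, that would be

    ./configure --disable-coroutine-pool

which makes every request allocate and destroy a fresh coroutine instead of
recycling pooled ones.)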
>>>
>>> I was able to increase the likelihood of hitting this (more vCPUs, fewer
>>> guests).
>>>
>>> bisect thinks that this commit is what makes things shaky:
>>>
>>> 4d68e86bb10159099da0798f74e7512955f15eec is the first bad commit
>>> commit 4d68e86bb10159099da0798f74e7512955f15eec
>>> Author: Paolo Bonzini <address@hidden>
>>> Date: Tue Dec 2 12:05:48 2014 +0100
>>>
>>> coroutine: rewrite pool to avoid mutex
>>>
>>>
>>> Christian
>>>
>>
>> Yes, reverting these 3 commits makes the problem go away during an overnight run.
>
> Let's see if a quick hack helps isolate the problem (either in the
> lockless magic or in the algorithm itself):
>
> diff --git a/qemu-coroutine.c b/qemu-coroutine.c
> index 525247b..38e1a32 100644
> --- a/qemu-coroutine.c
> +++ b/qemu-coroutine.c
> @@ -29,6 +29,7 @@ static unsigned int release_pool_size;
> static __thread QSLIST_HEAD(, Coroutine) alloc_pool =
> QSLIST_HEAD_INITIALIZER(pool);
> static __thread unsigned int alloc_pool_size;
> static __thread Notifier coroutine_pool_cleanup_notifier;
> +static QemuMutex pool_lock;
>
> static void coroutine_pool_cleanup(Notifier *n, void *value)
> {
> @@ -59,8 +60,10 @@ Coroutine *qemu_coroutine_create(CoroutineEntry *entry)
> * release_pool_size and the actual size of release_pool.  But
> * it is just a heuristic, it does not need to be perfect.
> */
> + qemu_mutex_lock(&pool_lock);
> alloc_pool_size = atomic_xchg(&release_pool_size, 0);
> QSLIST_MOVE_ATOMIC(&alloc_pool, &release_pool);
> + qemu_mutex_unlock(&pool_lock);
> co = QSLIST_FIRST(&alloc_pool);
> }
> }
> @@ -85,8 +88,10 @@ static void coroutine_delete(Coroutine *co)
>
> if (CONFIG_COROUTINE_POOL) {
> if (release_pool_size < POOL_BATCH_SIZE * 2) {
> + qemu_mutex_lock(&pool_lock);
> QSLIST_INSERT_HEAD_ATOMIC(&release_pool, co, pool_next);
> atomic_inc(&release_pool_size);
> + qemu_mutex_unlock(&pool_lock);
> return;
> }
> if (alloc_pool_size < POOL_BATCH_SIZE) {
>
>
That alone seems to help.
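
To spell out what the hack covers: the per-thread alloc_pool is left alone, and
the new mutex only brackets the two places where different threads touch the
shared release_pool concurrently (condensed from the qemu-coroutine.c hunks
above):

    /* producer side, coroutine_delete(), any thread: lock-free push of a
     * finished coroutine onto the shared release_pool */
    QSLIST_INSERT_HEAD_ATOMIC(&release_pool, co, pool_next);
    atomic_inc(&release_pool_size);

    /* consumer side, qemu_coroutine_create(), any thread: steal the whole
     * shared list into the thread-local alloc_pool in one shot */
    alloc_pool_size = atomic_xchg(&release_pool_size, 0);
    QSLIST_MOVE_ATOMIC(&alloc_pool, &release_pool);
    co = QSLIST_FIRST(&alloc_pool);

So if the mutex alone makes the crash disappear, that points at the concurrent
INSERT_HEAD_ATOMIC/MOVE_ATOMIC pair on release_pool (the "lockless magic")
rather than at the pool algorithm as such.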