qemu-discuss

[Qemu-discuss] apic recursive stack overflow


From: 尹杰 (Jie Yin)
Subject: [Qemu-discuss] apic recursive stack overflow
Date: Tue, 28 Aug 2012 10:50:35 +0800

Hi all,

I'm testing how the scheduling order of threads (IO thread, TCG
thread, etc.) in QEMU affects emulation, and I found a potential
recursive stack overflow in the APIC code that causes QEMU to crash
with a segmentation fault. The gdb backtrace is listed below:

(gdb) where -40
#130983 0x081ed9f4 in apic_set_irq (s=0x8aae6d0, vector_num=48,
trigger_mode=0) at
/home/cgos/Downloads/qemu/qemu-experi/qemu-1.0.1/hw/apic.c:434
#130984 0x081ecfd8 in apic_local_deliver (s=0x8aae6d0, vector=3) at
/home/cgos/Downloads/qemu/qemu-experi/qemu-1.0.1/hw/apic.c:182
#130985 0x081ed030 in apic_deliver_pic_intr (d=0x8aae6d0, level=1) at
/home/cgos/Downloads/qemu/qemu-experi/qemu-1.0.1/hw/apic.c:191
#130986 0x081ed8b5 in apic_update_irq (s=0x8aae6d0) at
/home/cgos/Downloads/qemu/qemu-experi/qemu-1.0.1/hw/apic.c:405
[... frames #130987 through #131013 repeat the same four-frame cycle
(apic_set_irq -> apic_local_deliver -> apic_deliver_pic_intr ->
apic_update_irq) ...]
#131014 0x081ed8b5 in apic_update_irq (s=0x8aae6d0) at
/home/cgos/Downloads/qemu/qemu-experi/qemu-1.0.1/hw/apic.c:405
#131015 0x081ee2c0 in apic_get_interrupt (d=0x8aae6d0) at
/home/cgos/Downloads/qemu/qemu-experi/qemu-1.0.1/hw/apic.c:620
#131016 0x0829704c in cpu_get_pic_interrupt (env=0x8aa59c0) at
/home/cgos/Downloads/qemu/qemu-experi/qemu-1.0.1/hw/pc.c:156
#131017 0x0821818e in cpu_x86_exec (env=0x8aa59c0) at
/home/cgos/Downloads/qemu/qemu-experi/qemu-1.0.1/cpu-exec.c:389
#131018 0x0821ded2 in tcg_cpu_exec (env=0x8aa59c0) at
/home/cgos/Downloads/qemu/qemu-experi/qemu-1.0.1/cpus.c:1033
#131019 0x0821dfe0 in tcg_exec_all () at
/home/cgos/Downloads/qemu/qemu-experi/qemu-1.0.1/cpus.c:1065
#131020 0x0821d5f3 in qemu_tcg_cpu_thread_fn (arg=0x8aa59c0) at
/home/cgos/Downloads/qemu/qemu-experi/qemu-1.0.1/cpus.c:780
#131021 0xb7ccfe99 in start_thread () from /lib/i386-linux-gnu/libpthread.so.0
#131022 0xb7ae173e in clone () from /lib/i386-linux-gnu/libc.so.6

This is an infinite loop (apic_update_irq -> apic_deliver_pic_intr ->
apic_local_deliver -> apic_set_irq -> apic_update_irq -> ...) that
eventually causes a QEMU segmentation fault due to stack overflow.
Could anybody tell me how to avoid this?

The QEMU version is 1.0.1. I started QEMU with direct kernel
booting. The kernel version is 2.6.36.1. I built both the initrd and
the hda image myself.

Thanks!

Jie.


