
Re: [Qemu-devel] [PATCH] This patch adds a new block driver : iSCSI


From: Mark Wu
Subject: Re: [Qemu-devel] [PATCH] This patch adds a new block driver : iSCSI
Date: Fri, 23 Sep 2011 17:15:52 +0800
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.22) Gecko/20110904 Red Hat/3.1.14-1.el6_1 Thunderbird/3.1.14

I tested this patch with the following command:
x86_64-softmmu/qemu-system-x86_64 --enable-kvm rhel54_1.img -m 1024 -net tap,ifname=tap0,script=no -net nic,model=virtio -sdl -drive file=iscsi://127.0.0.1/iqn.2011-09.com.example:server.target1/

I found that the whole qemu process froze (unreachable via ping, no response on the desktop) whenever there was I/O targeted at the iSCSI drive and the iSCSI target was forcefully stopped. After checking the backtrace with gdb, I found the I/O thread stuck on the mutex qemu_global_mutex, which was held by the vcpu thread. The vcpu thread should release it before re-entering the guest, but it was waiting endlessly for the completion of an iSCSI aio request, so it never got the chance to release the mutex, and the whole qemu process became unresponsive. This problem does not occur with the combination of virtio and iSCSI: in that case only the I/O process in the guest hangs, which is more acceptable. I am not sure how to fix this problem.


gdb backtrace:

(gdb) info threads
  2 Thread 0x7fa0fdd4c700 (LWP 5086)  0x0000003a868de383 in select () from /lib64/libc.so.6
* 1 Thread 0x7fa0fdd4d740 (LWP 5085)  0x0000003a8700dfe4 in __lll_lock_wait () from /lib64/libpthread.so.0
(gdb) bt
#0  0x0000003a8700dfe4 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x0000003a87009318 in _L_lock_854 () from /lib64/libpthread.so.0
#2  0x0000003a870091e7 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00000000004c9819 in qemu_mutex_lock (mutex=<value optimized out>) at qemu-thread-posix.c:54
#4  0x00000000004a46c6 in main_loop_wait (nonblocking=<value optimized out>) at /home/mark/Work/source/qemu/vl.c:1545
#5  0x00000000004a60d6 in main_loop (argc=<value optimized out>, argv=<value optimized out>, envp=<value optimized out>) at /home/mark/Work/source/qemu/vl.c:1579
#6  main (argc=<value optimized out>, argv=<value optimized out>, envp=<value optimized out>) at /home/mark/Work/source/qemu/vl.c:3574
(gdb) t 2
[Switching to thread 2 (Thread 0x7fa0fdd4c700 (LWP 5086))]#0 0x0000003a868de383 in select () from /lib64/libc.so.6
(gdb) bt
#0  0x0000003a868de383 in select () from /lib64/libc.so.6
#1  0x00000000004096aa in qemu_aio_wait () at aio.c:193
#2  0x0000000000409815 in qemu_aio_flush () at aio.c:113
#3  0x00000000004761ea in bmdma_cmd_writeb (bm=0x1db2230, val=8) at /home/mark/Work/source/qemu/hw/ide/pci.c:311
#4  0x0000000000555900 in access_with_adjusted_size (addr=0, value=0x7fa0fdd4bdb8, size=1, access_size_min=<value optimized out>, access_size_max=<value optimized out>, access=0x555820 <memory_region_write_accessor>, opaque=0x1db2370) at /home/mark/Work/source/qemu/memory.c:284
#5  0x0000000000555ae1 in memory_region_iorange_write (iorange=<value optimized out>, offset=<value optimized out>, width=<value optimized out>, data=8) at /home/mark/Work/source/qemu/memory.c:425
#6  0x000000000054eda1 in kvm_handle_io (env=0x192e080) at /home/mark/Work/source/qemu/kvm-all.c:834
#7  kvm_cpu_exec (env=0x192e080) at /home/mark/Work/source/qemu/kvm-all.c:976
#8  0x000000000052cc1a in qemu_kvm_cpu_thread_fn (arg=0x192e080) at /home/mark/Work/source/qemu/cpus.c:656
#9  0x0000003a870077e1 in start_thread () from /lib64/libpthread.so.0
#10 0x0000003a868e577d in clone () from /lib64/libc.so.6



