[Qemu-block] virtual machine cpu soft lockup when qemu attach disk


From: l00284672
Subject: [Qemu-block] virtual machine cpu soft lockup when qemu attach disk
Date: Tue, 4 Jun 2019 16:53:36 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0


Hi, I found a problem: the virtual machine's CPU soft locks up when I attach a disk to the VM while the backend storage network has a large delay or the I/O pressure is too high.

1) The disk XML that I attached is:

    <disk type='block' device='lun' rawio='yes'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/mapper/360022a11000c1e0a0787c23a000001cb'/>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-1-0'/>
      <address type='drive' controller='0' bus='0' target='1' unit='0'/>
    </disk>

2) The backtrace of the qemu main thread:

#0 0x0000ffff9d78402c in pread64 () from /lib64/libpthread.so.0
#1 0x0000aaaace3357d8 in pread64 (__offset=0, __nbytes=4096, __buf=0xaaaad47a5200, __fd=202) at /usr/include/bits/unistd.h:99
#2 raw_is_io_aligned (address@hidden, address@hidden, address@hidden) at block/raw-posix.c:294
#3 0x0000aaaace33597c in raw_probe_alignment (address@hidden, fd=202, address@hidden) at block/raw-posix.c:349
#4 0x0000aaaace335a48 in raw_refresh_limits (bs=0xaaaad32ea920, errp=0xfffffef7a330) at block/raw-posix.c:811
#5 0x0000aaaace3404b0 in bdrv_refresh_limits (bs=0xaaaad32ea920, errp=0xfffffef7a330, address@hidden) at block/io.c:122
#6 0x0000aaaace340504 in bdrv_refresh_limits (address@hidden, address@hidden) at block/io.c:97
#7 0x0000aaaace2eb9f0 in bdrv_open_common (address@hidden, address@hidden, options=<optimized out>, address@hidden)
at block.c:1194
#8 0x0000aaaace2eedec in bdrv_open_inherit (filename=<optimized out>, address@hidden "/dev/mapper/36384c4f100630193359db7a80000011d",
address@hidden, options=<optimized out>, address@hidden, flags=<optimized out>, address@hidden, address@hidden,
address@hidden, address@hidden) at block.c:1895
#9 0x0000aaaace2ef510 in bdrv_open (address@hidden "/dev/mapper/36384c4f100630193359db7a80000011d", address@hidden,
address@hidden, address@hidden, address@hidden) at block.c:1979
#10 0x0000aaaace331ef0 in blk_new_open (address@hidden "/dev/mapper/36384c4f100630193359db7a80000011d", address@hidden,
address@hidden, flags=128, address@hidden) at block/block-backend.c:213
#11 0x0000aaaace0da1f4 in blockdev_init (address@hidden "/dev/mapper/36384c4f100630193359db7a80000011d", address@hidden,
address@hidden) at blockdev.c:603
#12 0x0000aaaace0dc478 in drive_new (address@hidden, block_default_type=<optimized out>) at blockdev.c:1116
#13 0x0000aaaace0e3ee0 in add_init_drive (
address@hidden "file=/dev/mapper/36384c4f100630193359db7a80000011d,format=raw,if=none,id=drive-scsi0-0-0-3,cache=none,aio=native")
at device-hotplug.c:46
#14 0x0000aaaace0e3f78 in hmp_drive_add (mon=0xfffffef7a810, qdict=0xaaaad0c8f000) at device-hotplug.c:67
#15 0x0000aaaacdf7d688 in handle_hmp_command (mon=0xfffffef7a810, cmdline=<optimized out>) at /usr/src/debug/qemu-kvm-2.8.1/monitor.c:3199
#16 0x0000aaaacdf7d778 in qmp_human_monitor_command (
command_line=0xaaaacfc8e3c0 "drive_add dummy file=/dev/mapper/36384c4f100630193359db7a80000011d,format=raw,if=none,id=drive-scsi0-0-0-3,cache=none,aio=native",
has_cpu_index=false, cpu_index=0, address@hidden) at /usr/src/debug/qemu-kvm-2.8.1/monitor.c:660
#17 0x0000aaaace0fdb30 in qmp_marshal_human_monitor_command (args=<optimized out>, ret=0xfffffef7a9e0, errp=0xfffffef7a9d8) at qmp-marshal.c:2223
#18 0x0000aaaace3b6ad0 in do_qmp_dispatch (request=<optimized out>, errp=0xfffffef7aa20, address@hidden) at qapi/qmp-dispatch.c:115
#19 0x0000aaaace3b6d58 in qmp_dispatch (request=<optimized out>) at qapi/qmp-dispatch.c:142
#20 0x0000aaaacdf79398 in handle_qmp_command (parser=<optimized out>, tokens=<optimized out>) at /usr/src/debug/qemu-kvm-2.8.1/monitor.c:4010
#21 0x0000aaaace3bd6c0 in json_message_process_token (lexer=0xaaaacf834c80, input=<optimized out>, type=JSON_RCURLY, x=214, y=274) at qobject/json-streamer.c:105
#22 0x0000aaaace3f3d4c in json_lexer_feed_char (address@hidden, ch=<optimized out>, address@hidden) at qobject/json-lexer.c:319
#23 0x0000aaaace3f3e6c in json_lexer_feed (lexer=0xaaaacf834c80, buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:369
#24 0x0000aaaacdf77c64 in monitor_qmp_read (opaque=<optimized out>, buf=<optimized out>, size=<optimized out>) at /usr/src/debug/qemu-kvm-2.8.1/monitor.c:4040
#25 0x0000aaaace0eab18 in tcp_chr_read (chan=<optimized out>, cond=<optimized out>, opaque=0xaaaacf90b280) at qemu-char.c:3260
#26 0x0000ffff9dadf200 in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#27 0x0000aaaace3c4a00 in glib_pollfds_poll () at util/main-loop.c:230
#28 0x0000aaaace3c4a88 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:278
#29 0x0000aaaace3c4bf0 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:534
#30 0x0000aaaace0f5d08 in main_loop () at vl.c:2120
#31 0x0000aaaacdf3a770 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:5017
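
For reference, the probing code at the top of this backtrace boils down to roughly the following (heavily simplified from block/raw-posix.c in QEMU 2.8; see the real source for the exact loops and error handling):

static bool raw_is_io_aligned(int fd, void *buf, size_t len)
{
    /* A plain synchronous read: the calling thread sleeps here,
     * still holding qemu_global_mutex, until the storage answers. */
    return pread(fd, buf, len, 0) >= 0;
}

static void raw_probe_alignment(BlockDriverState *bs, int fd, Error **errp)
{
    size_t max_align = MAX(MAX_BLOCKSIZE, getpagesize());
    char *buf = qemu_memalign(max_align, max_align);
    size_t align;

    /* Guess the request alignment by retrying reads at larger and
     * larger sizes until one succeeds; every failed attempt is yet
     * another blocking pread() against a possibly very slow backend. */
    for (align = 512; align <= max_align; align <<= 1) {
        if (raw_is_io_aligned(fd, buf, align)) {
            bs->bl.request_alignment = align;
            break;
        }
    }
    qemu_vfree(buf);
}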

3) The backtrace of the vcpu thread:

From the backtrace we can see that, when handling a QMP command such as drive_add, the qemu main thread takes qemu_global_mutex and then calls pread in raw_probe_alignment. pread is a synchronous operation: if the backend storage network has a large delay or the I/O pressure is too high, the pread may not return for a long time. During that time the vcpu threads cannot acquire qemu_global_mutex and therefore cannot be scheduled, so the guest reports a CPU soft lockup.
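
The pattern is easy to demonstrate outside qemu. A minimal standalone illustration (not qemu code; "/dev/slow-device" is a placeholder for any high-latency block device):

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Stands in for the qemu main loop thread: it takes the global lock
 * and then issues a synchronous read that may block for a long time. */
static void *main_loop_thread(void *arg)
{
    char buf[4096];
    int fd = open("/dev/slow-device", O_RDONLY);

    pthread_mutex_lock(&global_mutex);      /* like qemu_global_mutex */
    if (fd >= 0) {
        pread(fd, buf, sizeof(buf), 0);     /* blocks while lock is held */
        close(fd);
    }
    pthread_mutex_unlock(&global_mutex);
    return NULL;
}

/* Stands in for a vcpu thread: it needs the same lock to make progress,
 * so while the read above stalls, the guest CPU stalls too and the
 * guest kernel reports a soft lockup. */
static void *vcpu_thread(void *arg)
{
    pthread_mutex_lock(&global_mutex);
    puts("vcpu can run again");
    pthread_mutex_unlock(&global_mutex);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, main_loop_thread, NULL);
    pthread_create(&b, NULL, vcpu_thread, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}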


I think the qemu main thread should not hold qemu_global_mutex for a long time while handling QMP commands that involve synchronous I/O operations such as pread, ioctl, etc.
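
One possible direction, as a rough sketch only (not a tested patch): push the blocking probe into qemu's thread pool and wait for it in a coroutine, so the main loop could drop the lock around its poll while the worker runs. The hard part, which this sketch glosses over, is that the bdrv_open()/raw_refresh_limits() path would have to run in coroutine context and the monitor command would need asynchronous completion:

typedef struct ProbeData {
    int fd;
    void *buf;
    size_t len;
} ProbeData;

/* Runs in a thread-pool worker: the slow synchronous read no longer
 * happens on the main loop thread. */
static int probe_worker(void *opaque)
{
    ProbeData *d = opaque;
    return pread(d->fd, d->buf, d->len, 0) >= 0;
}

static bool coroutine_fn raw_co_is_io_aligned(BlockDriverState *bs,
                                              int fd, void *buf, size_t len)
{
    ProbeData d = { .fd = fd, .buf = buf, .len = len };
    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));

    /* Yields this coroutine until the worker finishes. */
    return thread_pool_submit_co(pool, probe_worker, &d);
}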

Do you have any solutions or good ideas about this? Thanks for your reply!




