
From: l00284672
Subject: Re: [Qemu-block] Fwd: virtio_scsi_ctx_check failed when detach virtio_scsi disk
Date: Wed, 17 Jul 2019 17:45:42 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0

I reproduced it again on QEMU 4.0.0. The backtrace is below:

(gdb) bt
#0  0x0000ffff86aacbd0 in raise () from /lib64/libc.so.6
#1  0x0000ffff86aadf7c in abort () from /lib64/libc.so.6
#2  0x0000ffff86aa6124 in __assert_fail_base () from /lib64/libc.so.6
#3  0x0000ffff86aa61a4 in __assert_fail () from /lib64/libc.so.6
#4  0x0000000000529118 in virtio_scsi_ctx_check (d=<optimized out>, s=<optimized out>, s=<optimized out>) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:246
#5  0x0000000000529ec4 in virtio_scsi_handle_cmd_req_prepare (s=0x2779ec00, req=0xffff740397d0) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:559
#6  0x000000000052a228 in virtio_scsi_handle_cmd_vq (s=0x2779ec00, vq=0xffff7c6d7110) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:603
#7  0x000000000052afa8 in virtio_scsi_data_plane_handle_cmd (vdev=<optimized out>, vq=0xffff7c6d7110) at /home/qemu-4.0.0/hw/scsi/virtio-scsi-dataplane.c:59
#8  0x000000000054d94c in virtio_queue_host_notifier_aio_poll (opaque=<optimized out>) at /home/qemu-4.0.0/hw/virtio/virtio.c:2452
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

The SCSI controller is configured with an iothread. Hot-unplugging the SCSI disk while it is processing I/O can trigger this problem, because the main thread and the iothread run in parallel.

On 2019/7/17 16:41, Kevin Wolf wrote:
Am 16.07.2019 um 04:06 hat l00284672 geschrieben:
-------- Forwarded Message --------
Subject:        virtio_scsi_ctx_check failed when detach virtio_scsi disk
Date:   Mon, 15 Jul 2019 23:34:24 +0800
From:   l00284672 <address@hidden>
To:     address@hidden, address@hidden, Stefan Hajnoczi
<address@hidden>, Paolo Bonzini <address@hidden>
CC:     address@hidden

I found a problem: virtio_scsi_ctx_check fails when detaching a virtio_scsi disk. The backtrace is below:

(gdb) bt
#0  0x0000ffffb02e1bd0 in raise () from /lib64/libc.so.6
#1  0x0000ffffb02e2f7c in abort () from /lib64/libc.so.6
#2  0x0000ffffb02db124 in __assert_fail_base () from /lib64/libc.so.6
#3  0x0000ffffb02db1a4 in __assert_fail () from /lib64/libc.so.6
#4  0x00000000004eb9a8 in virtio_scsi_ctx_check (d=d@entry=0xc70d790,
s=<optimized out>, s=<optimized out>)
     at /Images/lzg/code/710/qemu-2.8.1/hw/scsi/virtio-scsi.c:243
#5  0x00000000004ec87c in virtio_scsi_handle_cmd_req_prepare
(s=s@entry=0xd27a7a0, req=req@entry=0xafc4b90)
     at /Images/lzg/code/710/qemu-2.8.1/hw/scsi/virtio-scsi.c:553
#6  0x00000000004ecc20 in virtio_scsi_handle_cmd_vq (s=0xd27a7a0,
     at /Images/lzg/code/710/qemu-2.8.1/hw/scsi/virtio-scsi.c:588
#7  0x00000000004eda20 in virtio_scsi_data_plane_handle_cmd (vdev=0x0,
     at /Images/lzg/code/710/qemu-2.8.1/hw/scsi/virtio-scsi-dataplane.c:57
#8  0x0000000000877254 in aio_dispatch (ctx=0xac61010) at
#9  0x00000000008773ec in aio_poll (ctx=0xac61010, blocking=true) at
#10 0x00000000005cd7cc in iothread_run (opaque=0xac5e4b0) at iothread.c:49
#11 0x000000000087a8b8 in qemu_thread_start (args=0xac61360) at
#12 0x00000000008a04e8 in thread_entry_for_hotfix (pthread_cb=0x0) at
#13 0x0000ffffb041c8bc in start_thread () from /lib64/libpthread.so.0
#14 0x0000ffffb0382f8c in thread_start () from /lib64/libc.so.6

assert(blk_get_aio_context(d->conf.blk) == s->ctx) failed.

I think the following patch introduced this problem.

Commit a6f230c8d13a7ff3a0c7f1097412f44bfd9eff0b moves the BlockBackend back to
the main AioContext on unplug. It sets the AioContext of the SCSIDevice to the
main AioContext, but s->ctx is still the iothread AioContext. Is this
a bug?
Yes, a failing assertion is always a bug.

The commit you mention doesn't really do anything wrong, because when
the device is unplugged, there shouldn't be any more requests that could
fail an assertion later. If anything, we could have a bug in making sure
that no requests are in flight any more during unplug, but this would be
a separate issue.

We fixed some AioContext related bugs recently. Which QEMU version did
you use when you ran into the bug? Can you try on current git master?



