Re: [PULL 09/14] qmp: Move dispatcher to a coroutine
From: Kevin Wolf
Subject: Re: [PULL 09/14] qmp: Move dispatcher to a coroutine
Date: Tue, 13 Oct 2020 13:29:48 +0200
On 13.10.2020 at 09:56, Philippe Mathieu-Daudé wrote:
> On Mon, Oct 12, 2020 at 8:57 PM Ben Widawsky <ben@bwidawsk.net> wrote:
> >
> > On 20-10-12 16:02:34, Alex Bennée wrote:
> > >
> > > Kevin Wolf <kwolf@redhat.com> writes:
> > >
> > > > On 12.10.2020 at 13:53, Philippe Mathieu-Daudé wrote:
> > > >> On 10/12/20 1:25 PM, Kevin Wolf wrote:
> > > >> > On 12.10.2020 at 12:47, Alex Bennée wrote:
> > > >> > >
> > > >> > > Markus Armbruster <armbru@redhat.com> writes:
> > > >> > >
> > > >> > > > From: Kevin Wolf <kwolf@redhat.com>
> > > >> > > >
> > > >> > > > This moves the QMP dispatcher to a coroutine and runs all QMP
> > > >> > > > command handlers that declare 'coroutine': true in coroutine
> > > >> > > > context so they can avoid blocking the main loop while doing I/O
> > > >> > > > or waiting for other events.
> > > >> > >
> > > >> > > This subtly changes the replay behaviour leading to a hang in:
> > > >> > >
> > > >> > >   10:55:18 [alex.bennee@hackbox2:~/l/q/b/bisect] (625581c2…)|✚1(+1/-1) + ./tests/venv/bin/avocado run tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
> > > >> > >   Fetching asset from tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
> > > >> > >   JOB ID     : ec11fd2544f06e6c0d421f16afa757b49f7ed734
> > > >> > >   JOB LOG    : /home/alex.bennee/avocado/job-results/job-2020-10-12T11.40-ec11fd2/job.log
> > > >> > >    (1/1) tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt: ERROR: Could not perform graceful shutdown (26.27 s)
> > > >> > >   RESULTS    : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
> > > >> > >   JOB TIME   : 27.77 s
> > > >> > >
> > > >> > > Looking at the log:
> > > >> > >
> > > >> > > 2020-10-12 11:40:31,426 __init__ L0085 DEBUG| [    3.887411] rtc-pl031 9010000.pl031: setting system clock to 2020-10-12 10:40:31 UTC (1602499231)
> > > >> > > 2020-10-12 11:40:31,428 __init__ L0085 DEBUG| [    3.887431] sr_init: No PMIC hook to init smartreflex
> > > >> > > 2020-10-12 11:40:31,447 __init__ L0085 DEBUG| [    3.897193] uart-pl011 9000000.pl011: no DMA platform data
> > > >> > > 2020-10-12 11:40:31,460 __init__ L0085 DEBUG| [    3.897242] md: Waiting for all devices to be available before autodetect
> > > >> > > 2020-10-12 11:40:31,462 __init__ L0085 DEBUG| [    3.897259] md: If you don't use raid, use raid=noautodetect
> > > >> > > 2020-10-12 11:40:31,475 __init__ L0085 DEBUG| [    3.897819] md: Autodetecting RAID arrays.
> > > >> > > 2020-10-12 11:40:31,476 __init__ L0085 DEBUG| [    3.897832] md: autorun ...
> > > >> > > 2020-10-12 11:40:31,477 __init__ L0085 DEBUG| [    3.897842] md: ... autorun DONE.
> > > >> > > 2020-10-12 11:40:31,483 __init__ L0085 DEBUG| [    3.897962] VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
> > > >> > > 2020-10-12 11:40:31,483 qmp L0245 DEBUG| >>> {'execute': 'quit'}
> > > >> > > 2020-10-12 11:40:31,495 qmp L0145 DEBUG| <<< {'timestamp': {'seconds': 1602499231, 'microseconds': 493379}, 'event': 'SHUTDOWN', 'data': {'guest': True, 'reason': 'guest-reset'}}
> > > >> > > 2020-10-12 11:40:31,733 machine L0325 WARNI| qemu received signal 6; command: "./qemu-system-arm -display none -vga none -chardev socket,id=mon,path=/var/tmp/tmpzls53khe/qemu-8487-monitor.sock -mon chardev=mon,mode=control -machine virt -chardev socket,id=console,path=/var/tmp/tmpzls53khe/qemu-8487-console.sock,server,nowait -serial chardev:console -icount shift=1,rr=record,rrfile=/var/tmp/avocado_n00stdrf/avocado_job_aw60qdul/1-tests_acceptance_replay_kernel.py_ReplayKernel.test_arm_virt/replay.bin -kernel /home/alex.bennee/avocado/data/cache/by_location/62750ce9e069e69e6a7ff04ff54c382ee660b92a/vmlinuz -append printk.time=1 panic=-1 console=ttyAMA0 -net none -no-reboot"
> > > >> >
> > > >> > This looks like a crash (SIGABRT) rather than a hang. Do you have a
> > > >> > stack trace for the crashed process?
> > > >>
> > > >> No crash, exit(0):
> > > >
> > > > Why does the log say "qemu received signal 6" then?
> > > >
> > > >> VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
> > > >
> > > > Alex has this error in the logs before this commit, so I assume this is
> > > > expected. All of the following is then probably expected, too, because
> > > > it follows directly from this error:
> > > >
> > > >> Please append a correct "root=" boot option; here are the available partitions:
> > > >> Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
> > > >> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.16-300.fc29.armv7hl #1
> > > >> Hardware name: Generic DT based system
> > > >> [<c0313f7c>] (unwind_backtrace) from [<c030dc64>] (show_stack+0x20/0x24)
> > > >> [<c030dc64>] (show_stack) from [<c0b50ec4>] (dump_stack+0x88/0xa8)
> > > >> [<c0b50ec4>] (dump_stack) from [<c03592f8>] (panic+0xd4/0x26c)
> > > >> [<c03592f8>] (panic) from [<c110183c>] (mount_block_root+0x250/0x2ec)
> > > >> [<c110183c>] (mount_block_root) from [<c1101950>] (mount_root+0x78/0x90)
> > > >> [<c1101950>] (mount_root) from [<c1101ac4>] (prepare_namespace+0x15c/0x19c)
> > > >> [<c1101ac4>] (prepare_namespace) from [<c11012e8>] (kernel_init_freeable+0x2c0/0x370)
> > > >> [<c11012e8>] (kernel_init_freeable) from [<c0b63914>] (kernel_init+0x18/0x128)
> > > >> [<c0b63914>] (kernel_init) from [<c03010e8>] (ret_from_fork+0x14/0x2c)
> > > >> Exception stack(0xc790bfb0 to 0xc790bff8)
> > > >> bfa0:                                     00000000 00000000 00000000 00000000
> > > >> bfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> > > >> bfe0: 00000000 00000000 00000000 00000000 00000013 00000000
> > > >>
> > > >> -> PSCI call
> > > >> -> QEMU_PSCI_0_2_FN_SYSTEM_RESET
> > > >> -> SHUTDOWN_CAUSE_GUEST_RESET
> > > >> -> exit(0)
> > >
> > > Yes - the test is recording the kernel up until the point it gives up.
> > >
> > > > Hm... So you're saying that the test sends a 'quit' QMP command, but
> > > > before it could be processed, the guest causes QEMU to exit, so the test
> > > > will never receive a reply to its request?
> > > >
> > > > If commit 9ce44e2ce2 changes anything about this, it seems to me that it
> > > > would be that more QMP commands are processed during monitor_cleanup()
> > > > because it doesn't just delete the dispatcher BH, but waits until it's
> > > > not busy any more.
> > > >
> > > > Looking at this code again, however, the order in monitor_cleanup() is
> > > > probably wrong. We should first shut down the dispatcher (which may
> > > > still be using Monitor objects) and then destroy the monitors. This
> > > > could possibly explain a crash, but probably not wrong results with a
> > > > successful shutdown.
> > >
> > > I suspect this is a race between QEMU shutting down because the guest
> > > shut it down and the acceptance test trying to shut things down via QMP.
> > > I think the proper fix is either to:
> >
> > I'm not sure my problem is the same, but I do have the same symptom.
> >
> > >
> > >  - s/panic=-1/panic=0/ in the command line (and probably drop --no-reboot)
> > >
> > >    which would let the acceptance test cleanly shut down via QMP.
> >
> > I tried this, which works well on some of the platforms which were failing.
> > -    KERNEL_COMMON_COMMAND_LINE = 'printk.time=1 panic=-1 '
> > +    KERNEL_COMMON_COMMAND_LINE = 'printk.time=1 panic=-0 '
> >
> >      def run_vm(self, kernel_path, kernel_command_line, console_pattern,
> >                 record, shift, args, replay_path):
> > @@ -47,8 +47,8 @@ class ReplayKernel(LinuxKernelTest):
> >                      (shift, mode, replay_path),
> >                      '-kernel', kernel_path,
> >                      '-append', kernel_command_line,
> > -                    '-net', 'none',
> > -                    '-no-reboot')
> > +                    '-net', 'none'
> > +                    )
> >          if args:
> >              vm.add_args(*args)
> >          vm.launch()
> > @@ -154,7 +154,7 @@ class ReplayKernel(LinuxKernelTest):
> >          kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
> >                                 'console=ttyS0,115200 '
> >                                 'usbcore.nousb '
> > -                               'panic=-1 noreboot')
> > +                               'panic=-0')
> >          console_pattern = 'Boot successful.'
> >
> > >
> > >  - modify the test to declare that qemu will shut itself down and
> > >    therefore no "quit" needs to be sent
> > >
> > > WDYT?
>
> Can you send it as a formal patch please? :)
Please don't "fix" the test case when the problem is a QEMU bug.
I have now reproduced the bug myself to fill in the missing information,
and this is how it crashes:
(gdb) bt
#0  0x00007fe541cf4bc5 in raise () at /lib64/libc.so.6
#1  0x00007fe541cdd8a4 in abort () at /lib64/libc.so.6
#2  0x000055c24e965327 in error_exit (err=16, msg=0x55c24eead3a0 <__func__.33> "qemu_mutex_destroy") at ../util/qemu-thread-posix.c:37
#3  0x000055c24e9654c3 in qemu_mutex_destroy (mutex=0x55c25133e0f0) at ../util/qemu-thread-posix.c:70
#4  0x000055c24e7cfaf1 in monitor_data_destroy_qmp (mon=0x55c25133dfd0) at ../monitor/qmp.c:439
#5  0x000055c24e7d23bc in monitor_data_destroy (mon=0x55c25133dfd0) at ../monitor/monitor.c:615
#6  0x000055c24e7d253a in monitor_cleanup () at ../monitor/monitor.c:644
#7  0x000055c24e6cb002 in qemu_cleanup () at ../softmmu/vl.c:4549
#8  0x000055c24e0d259b in main (argc=24, argv=0x7ffff66b0d58, envp=0x7ffff66b0e20) at ../softmmu/main.c:51
The reason is that qemu_mutex_destroy(&mon->qmp_queue_lock) is called
while mon->qmp_queue_lock is still held by the dispatcher coroutine.
This is fixed by correcting the order in monitor_cleanup() as I had
already noticed above.
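
To see why this aborts rather than just misbehaving: qemu_mutex_destroy()
in util/qemu-thread-posix.c calls error_exit() whenever
pthread_mutex_destroy() returns an error, and on glibc destroying a mutex
that is still locked fails with EBUSY (16), which is exactly the err=16 in
frame #2 above. A minimal standalone analogue in plain pthreads (not QEMU
code, just an illustration of the failure mode; the comments only mirror
the roles from the backtrace):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Stand-in for the dispatcher coroutine still holding qmp_queue_lock */
    pthread_mutex_lock(&queue_lock);

    /* Stand-in for monitor_data_destroy_qmp() destroying the held mutex */
    int err = pthread_mutex_destroy(&queue_lock);
    if (err) {
        /* error_exit() prints the error and calls abort(), raising
         * SIGABRT, i.e. the "qemu received signal 6" in the avocado log */
        fprintf(stderr, "qemu_mutex_destroy: %s (%d)\n", strerror(err), err);
        abort();
    }
    return 0;
}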
For the sake of completeness, this is where the dispatcher coroutine has
yielded when the process crashes:
(gdb) l *0x000055c24e7cf3da
0x55c24e7cf3da is in monitor_qmp_dispatcher_co (../monitor/qmp.c:273).
268              * involves an AIO_WAIT_WHILE().
269              */
270             aio_co_schedule(qemu_get_aio_context(), qmp_dispatcher_co);
271 ===>        qemu_coroutine_yield();
272
273             mon = req_obj->mon;
274             /* qmp_oob_enabled() might change after "qmp_capabilities" */
275             need_resume = !qmp_oob_enabled(mon) ||
276                 mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1;
277             qemu_mutex_unlock(&mon->qmp_queue_lock);
Please try the following patch. It fixes the problem for me.
Kevin
diff --git a/monitor/monitor.c b/monitor/monitor.c
index ceffe1a83b..84222cd130 100644
--- a/monitor/monitor.c
+++ b/monitor/monitor.c
@@ -632,23 +632,9 @@ void monitor_cleanup(void)
         iothread_stop(mon_iothread);
     }
 
-    /* Flush output buffers and destroy monitors */
-    qemu_mutex_lock(&monitor_lock);
-    monitor_destroyed = true;
-    while (!QTAILQ_EMPTY(&mon_list)) {
-        Monitor *mon = QTAILQ_FIRST(&mon_list);
-        QTAILQ_REMOVE(&mon_list, mon, entry);
-        /* Permit QAPI event emission from character frontend release */
-        qemu_mutex_unlock(&monitor_lock);
-        monitor_flush(mon);
-        monitor_data_destroy(mon);
-        qemu_mutex_lock(&monitor_lock);
-        g_free(mon);
-    }
-    qemu_mutex_unlock(&monitor_lock);
-
     /*
-     * The dispatcher needs to stop before destroying the I/O thread.
+     * The dispatcher needs to stop before destroying the monitor and
+     * the I/O thread.
      *
      * We need to poll both qemu_aio_context and iohandler_ctx to make
      * sure that the dispatcher coroutine keeps making progress and
@@ -665,6 +651,21 @@ void monitor_cleanup(void)
                    (aio_poll(iohandler_get_aio_context(), false),
                     qatomic_mb_read(&qmp_dispatcher_co_busy)));
 
+    /* Flush output buffers and destroy monitors */
+    qemu_mutex_lock(&monitor_lock);
+    monitor_destroyed = true;
+    while (!QTAILQ_EMPTY(&mon_list)) {
+        Monitor *mon = QTAILQ_FIRST(&mon_list);
+        QTAILQ_REMOVE(&mon_list, mon, entry);
+        /* Permit QAPI event emission from character frontend release */
+        qemu_mutex_unlock(&monitor_lock);
+        monitor_flush(mon);
+        monitor_data_destroy(mon);
+        qemu_mutex_lock(&monitor_lock);
+        g_free(mon);
+    }
+    qemu_mutex_unlock(&monitor_lock);
+
     if (mon_iothread) {
         iothread_destroy(mon_iothread);
         mon_iothread = NULL;
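
For illustration, the corrected ordering in the same plain-pthreads
analogue as before (again a sketch, not QEMU code, with a thread standing
in for the dispatcher coroutine): first make the dispatcher release its
lock and stop, and only then destroy what it was using.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the dispatcher coroutine working under qmp_queue_lock */
static void *dispatcher(void *arg)
{
    pthread_mutex_lock(&queue_lock);
    /* ... dispatch a request ... */
    pthread_mutex_unlock(&queue_lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, dispatcher, NULL);

    /* Shut down the dispatcher first (the patch moves the dispatcher
     * drain before monitor destruction for the same reason) ... */
    pthread_join(t, NULL);

    /* ... then it is safe to destroy the lock it may have held. */
    pthread_mutex_destroy(&queue_lock);

    printf("clean shutdown\n");
    return 0;
}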