
Re: iotest 030 still occasionally intermittently failing


From: Vladimir Sementsov-Ogievskiy
Subject: Re: iotest 030 still occasionally intermittently failing
Date: Thu, 19 Nov 2020 22:30:09 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.4.3

19.11.2020 19:11, Vladimir Sementsov-Ogievskiy wrote:
16.11.2020 20:59, Peter Maydell wrote:
On Mon, 16 Nov 2020 at 17:34, Alberto Garcia <berto@igalia.com> wrote:
Do you know if there is a core dump or stack trace available?

Nope, sorry. What you get is what the 'vm-build-netbsd' etc targets
produce, so if you want more diagnostics on failures you have to
arrange for the test harness to produce them...

thanks
-- PMM


Hi!

After some iterations I've reproduced a SIGABRT:

#0  0x00007feb701bae35 in raise () at /lib64/libc.so.6
#1  0x00007feb701a5895 in abort () at /lib64/libc.so.6
#2  0x00007feb701a5769 in _nl_load_domain.cold () at /lib64/libc.so.6
#3  0x00007feb701b3566 in annobin_assert.c_end () at /lib64/libc.so.6
#4  0x000055a93374f7d3 in bdrv_replace_child (child=0x55a9363a3a00, new_bs=0x0) at ../block.c:2648
#5  0x000055a93374fd5a in bdrv_detach_child (child=0x55a9363a3a00) at ../block.c:2777
#6  0x000055a93374fd9c in bdrv_root_unref_child (child=0x55a9363a3a00) at ../block.c:2789
#7  0x000055a933722e8b in block_job_remove_all_bdrv (job=0x55a935f4f4b0) at ../blockjob.c:191
#8  0x000055a933722bb2 in block_job_free (job=0x55a935f4f4b0) at ../blockjob.c:88
#9  0x000055a9337755fa in job_unref (job=0x55a935f4f4b0) at ../job.c:380
#10 0x000055a9337767a6 in job_exit (opaque=0x55a935f4f4b0) at ../job.c:894
#11 0x000055a93386037e in aio_bh_call (bh=0x55a9352e16b0) at ../util/async.c:136
#12 0x000055a933860488 in aio_bh_poll (ctx=0x55a9351366f0) at ../util/async.c:164
#13 0x000055a93383151e in aio_dispatch (ctx=0x55a9351366f0) at ../util/aio-posix.c:381
#14 0x000055a9338608b9 in aio_ctx_dispatch (source=0x55a9351366f0, callback=0x0, user_data=0x0) at ../util/async.c:306
#15 0x00007feb71349ecd in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#16 0x000055a933840300 in glib_pollfds_poll () at ../util/main-loop.c:221
#17 0x000055a93384037a in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:244
#18 0x000055a933840482 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:520
#19 0x000055a933603979 in qemu_main_loop () at ../softmmu/vl.c:1678
#20 0x000055a933190046 in main (argc=20, argv=0x7ffd48c31138, envp=0x7ffd48c311e0)

(gdb) fr 4
#4  0x000055a93374f7d3 in bdrv_replace_child (child=0x55a9363a3a00, new_bs=0x0) at ../block.c:2648
2648            assert(tighten_restrictions == false);
(gdb) list
2643            int ret;
2644
2645            bdrv_get_cumulative_perm(old_bs, &perm, &shared_perm);
2646            ret = bdrv_check_perm(old_bs, NULL, perm, shared_perm, NULL,
2647                                  &tighten_restrictions, NULL);
2648            assert(tighten_restrictions == false);
2649            if (ret < 0) {
2650                /* We only tried to loosen restrictions, so errors are not fatal */
2651                bdrv_abort_perm_update(old_bs);
2652            } else {
(gdb) p tighten_restrictions
$1 = true
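
From this frame the child's own permission fields can be compared against the cumulative values, e.g. with something like the following (field names as in the block layer of this era; output omitted here):

(gdb) p child->perm
(gdb) p child->shared_perm
(gdb) p child->bs->node_name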



I've modified the code a bit to crash when we actually want to set
tighten_restrictions to true, and got the following backtrace:
#0  0x00007f6dbb49ee35 in raise () at /lib64/libc.so.6
#1  0x00007f6dbb489895 in abort () at /lib64/libc.so.6
#2  0x000055b9174104d7 in bdrv_check_perm (bs=0x55b918f09720, q=0x0, cumulative_perms=1, cumulative_shared_perms=21, ignore_children=0x55b918a57b20 = {...}, tighten_restrictions=0x55b917b044f8 <abort_on_set_to_true>, errp=0x0) at ../block.c:2009
#3  0x000055b917410ec0 in bdrv_check_update_perm (bs=0x55b918f09720, q=0x0, new_used_perm=1, new_shared_perm=21, ignore_children=0x55b918a57b20 = {...}, tighten_restrictions=0x55b917b044f8 <abort_on_set_to_true>, errp=0x0) at ../block.c:2280
#4  0x000055b917410f38 in bdrv_child_check_perm (c=0x55b91921fcf0, q=0x0, perm=1, shared=21, ignore_children=0x55b918a57b20 = {...}, tighten_restrictions=0x55b917b044f8 <abort_on_set_to_true>, errp=0x0) at ../block.c:2294
#5  0x000055b91741078c in bdrv_check_perm (bs=0x55b918defd90, q=0x0, cumulative_perms=1, cumulative_shared_perms=21, ignore_children=0x0, tighten_restrictions=0x55b917b044f8 <abort_on_set_to_true>, errp=0x0) at ../block.c:2076
#6  0x000055b91741194e in bdrv_replace_child (child=0x55b919cf6200, new_bs=0x0) at ../block.c:2666
#7  0x000055b917411f1d in bdrv_detach_child (child=0x55b919cf6200) at ../block.c:2798
#8  0x000055b917411f5f in bdrv_root_unref_child (child=0x55b919cf6200) at ../block.c:2810
#9  0x000055b9173e4d88 in block_job_remove_all_bdrv (job=0x55b918f06a60) at ../blockjob.c:191
#10 0x000055b9173e4aaf in block_job_free (job=0x55b918f06a60) at ../blockjob.c:88
#11 0x000055b917437aca in job_unref (job=0x55b918f06a60) at ../job.c:380
#12 0x000055b917438c76 in job_exit (opaque=0x55b918f06a60) at ../job.c:894
#13 0x000055b917522a57 in aio_bh_call (bh=0x55b919a2b3b0) at ../util/async.c:136
#14 0x000055b917522b61 in aio_bh_poll (ctx=0x55b918a866f0) at ../util/async.c:164
#15 0x000055b9174f3bf7 in aio_dispatch (ctx=0x55b918a866f0) at ../util/aio-posix.c:381
#16 0x000055b917522f92 in aio_ctx_dispatch (source=0x55b918a866f0, callback=0x0, user_data=0x0) at ../util/async.c:306
#17 0x00007f6dbc62decd in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#18 0x000055b9175029d9 in glib_pollfds_poll () at ../util/main-loop.c:221
#19 0x000055b917502a53 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:244
#20 0x000055b917502b5b in main_loop_wait (nonblocking=0) at ../util/main-loop.c:520
#21 0x000055b9172c5979 in qemu_main_loop () at ../softmmu/vl.c:1678
#22 0x000055b916e52046 in main (argc=20, argv=0x7fff7f81f208, envp=0x7fff7f81f2b0)
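
(Roughly, the modification was along these lines; this is a minimal sketch rather than the exact patch, with the global's name taken from the abort_on_set_to_true symbol visible in frames #2-#5:)

/* Sketch: a global flag whose address is passed in as the
 * tighten_restrictions output pointer, so it can be recognized
 * inside bdrv_check_perm(). */
static bool abort_on_set_to_true;

/* In bdrv_check_perm(), at the point where restrictions would be
 * tightened: crash immediately instead of failing the later
 * assertion in bdrv_replace_child(). */
if (tighten_restrictions == &abort_on_set_to_true) {
    abort();
}
*tighten_restrictions = true;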


The picture taken at the moment of the abort is attached (the state is the
same as at the moment before the bdrv_replace_child() call). So it looks
like the graph is already corrupted: you can see that the backing
permissions are not propagated to the node2-node0 child.
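
For context, a node's cumulative permissions are derived from its parent
BdrvChild links roughly as follows (simplified from
bdrv_get_cumulative_perm() in block.c), which is why one child with stale
perm/shared_perm values leaves the whole node inconsistent:

/* Simplified sketch of block.c:bdrv_get_cumulative_perm(): a node's
 * cumulative permissions OR together every parent's perm and AND
 * together every parent's shared_perm. */
static void cumulative_perms(BlockDriverState *bs,
                             uint64_t *perm, uint64_t *shared_perm)
{
    BdrvChild *c;
    uint64_t cumulative = 0, cumulative_shared = BLK_PERM_ALL;

    QLIST_FOREACH(c, &bs->parents, next_parent) {
        cumulative |= c->perm;
        cumulative_shared &= c->shared_perm;
    }

    *perm = cumulative;
    *shared_perm = cumulative_shared;
}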

How the graph got corrupted is still the question...

--
Best regards,
Vladimir

Attachment: abort.png
Description: PNG image

