
Re: [PATCH v4 09/23] job: call job_enter from job_pause

From: Max Reitz
Subject: Re: [PATCH v4 09/23] job: call job_enter from job_pause
Date: Wed, 7 Apr 2021 13:19:51 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.0

On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
If the main job coroutine has called job_yield() (while some background
process is in progress), we should give it a chance to call
job_pause_point(). This will be used in backup, once it is moved to
async block-copy.

Note that job_user_pause() is not enough: we also want to handle
child_job_drained_begin(), which calls job_pause().

Still, if the job is already in job_do_yield() inside job_pause_point(),
we should not enter it.

The output of iotest 109 changes: on stop we do bdrv_drain_all(), which
now triggers a job pause immediately (and a pause after ready puts the
job into the standby state).

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  job.c                      |  3 +++
  tests/qemu-iotests/109.out | 24 ++++++++++++++++++++++++
  2 files changed, 27 insertions(+)

While looking into

I noticed this:

$ ./qemu-img create -f raw src.img 1G
$ ./qemu-img create -f raw dst.img 1G

$ (echo '
'; sleep 3; echo '
                  "qemu-io mirror-top \"write 0 1G\""}}') \
| x86_64-softmmu/qemu-system-x86_64 \
    -qmp stdio \
    -blockdev file,node-name=source,filename=src.img \
    -blockdev file,node-name=target,filename=dst.img \
    -object iothread,id=iothr0 \
    -device virtio-blk,drive=source,iothread=iothr0

Before this commit, qemu-io reported an error about a permission conflict with virtio-blk. After this commit, qemu instead aborts (“qemu: qemu_mutex_unlock_impl: Operation not permitted”):

#0  0x00007f8445a4eef5 in raise () at /usr/lib/libc.so.6
#1  0x00007f8445a38862 in abort () at /usr/lib/libc.so.6
#2  0x000055fbb14a36bf in error_exit
(err=<optimized out>, msg=msg@entry=0x55fbb1634790 <__func__.27> "qemu_mutex_unlock_impl")
    at ../util/qemu-thread-posix.c:37
#3  0x000055fbb14a3bc3 in qemu_mutex_unlock_impl
(mutex=mutex@entry=0x55fbb25ab6e0, file=file@entry=0x55fbb1636957 "../util/async.c", line=line@entry=650)
    at ../util/qemu-thread-posix.c:109
#4 0x000055fbb14b2e75 in aio_context_release (ctx=ctx@entry=0x55fbb25ab680) at ../util/async.c:650
#5  0x000055fbb13d2029 in bdrv_do_drained_begin
(bs=bs@entry=0x55fbb3a87000, recursive=recursive@entry=false, parent=parent@entry=0x0, ignore_bds_parents=ignore_bds_parents@entry=false, poll=poll@entry=true) at ../block/io.c:441
#6  0x000055fbb13d2192 in bdrv_do_drained_begin
(poll=true, ignore_bds_parents=false, parent=0x0, recursive=false, bs=0x55fbb3a87000) at ../block/io.c:448
#7  0x000055fbb13c71a7 in blk_drain (blk=0x55fbb26c5a00) at ../block/block-backend.c:1718
#8  0x000055fbb13c8bbd in blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:498
#9  blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:491
#10 0x000055fbb1024863 in hmp_qemu_io (mon=0x7fffaf3fc7d0, qdict=<optimized out>)
    at ../block/monitor/block-hmp-cmds.c:628

Can you make anything out of this?

