qemu-devel

Re: [PATCH] monitor: hmp_qemu_io: acquire aio context, fix crash


From: Philippe Mathieu-Daudé
Subject: Re: [PATCH] monitor: hmp_qemu_io: acquire aio context, fix crash
Date: Wed, 21 Apr 2021 21:47:55 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.1

On 4/21/21 10:32 AM, Vladimir Sementsov-Ogievskiy wrote:
> Max reported the following bug:
> 
> $ ./qemu-img create -f raw src.img 1G
> $ ./qemu-img create -f raw dst.img 1G
> 
> $ (echo '
>    {"execute":"qmp_capabilities"}
>    {"execute":"blockdev-mirror",
>     "arguments":{"job-id":"mirror",
>                  "device":"source",
>                  "target":"target",
>                  "sync":"full",
>                  "filter-node-name":"mirror-top"}}
> '; sleep 3; echo '
>    {"execute":"human-monitor-command",
>     "arguments":{"command-line":
>                  "qemu-io mirror-top \"write 0 1G\""}}') \
> | x86_64-softmmu/qemu-system-x86_64 \
>    -qmp stdio \
>    -blockdev file,node-name=source,filename=src.img \
>    -blockdev file,node-name=target,filename=dst.img \
>    -object iothread,id=iothr0 \
>    -device virtio-blk,drive=source,iothread=iothr0
> 
> crashes:
> 
> 0  raise () at /usr/lib/libc.so.6
> 1  abort () at /usr/lib/libc.so.6
> 2  error_exit
>    (err=<optimized out>,
>    msg=msg@entry=0x55fbb1634790 <__func__.27> "qemu_mutex_unlock_impl")
>    at ../util/qemu-thread-posix.c:37
> 3  qemu_mutex_unlock_impl
>    (mutex=mutex@entry=0x55fbb25ab6e0,
>    file=file@entry=0x55fbb1636957 "../util/async.c",
>    line=line@entry=650)
>    at ../util/qemu-thread-posix.c:109
> 4  aio_context_release (ctx=ctx@entry=0x55fbb25ab680) at ../util/async.c:650
> 5  bdrv_do_drained_begin
>    (bs=bs@entry=0x55fbb3a87000, recursive=recursive@entry=false,
>    parent=parent@entry=0x0,
>    ignore_bds_parents=ignore_bds_parents@entry=false,
>    poll=poll@entry=true) at ../block/io.c:441
> 6  bdrv_do_drained_begin
>    (poll=true, ignore_bds_parents=false, parent=0x0, recursive=false,
>    bs=0x55fbb3a87000) at ../block/io.c:448
> 7  blk_drain (blk=0x55fbb26c5a00) at ../block/block-backend.c:1718
> 8  blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:498
> 9  blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:491
> 10 hmp_qemu_io (mon=0x7fffaf3fc7d0, qdict=<optimized out>)
>    at ../block/monitor/block-hmp-cmds.c:628
> 
> man pthread_mutex_unlock
> ...
>     EPERM  The  mutex type is PTHREAD_MUTEX_ERRORCHECK or
>     PTHREAD_MUTEX_RECURSIVE, or the mutex is a robust mutex, and the
>     current thread does not own the mutex.
> 
> So, the thread doesn't own the mutex. And we have an iothread here.
> 
> Next, note that AIO_WAIT_WHILE() documents that ctx must be acquired
> exactly once by the caller. But where is it acquired in this call
> stack? Seemingly nowhere.
> 
> qemuio_command() does acquire the AioContext, but we need the context
> acquired around blk_unref() as well. Let's do that.
> 
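For readers unfamiliar with the contract being referenced, here is a
minimal sketch of the locking rule that AIO_WAIT_WHILE() documents.
The predicate and the helper function are hypothetical, for
illustration only; only the aio_context_acquire()/aio_context_release()
pairing and the macro itself are real QEMU API:

    #include "qemu/osdep.h"
    #include "block/aio.h"
    #include "block/aio-wait.h"
    #include "sysemu/block-backend.h"

    /* Hypothetical predicate, for illustration only. */
    extern bool example_blk_is_busy(BlockBackend *blk);

    /*
     * AIO_WAIT_WHILE(ctx, cond) expects the calling thread to hold ctx
     * exactly once.  The macro drops the context while it polls and
     * re-takes it before returning, so entering it with the context not
     * held at all (as in the backtrace above) makes the internal unlock
     * fail, which pthread reports as EPERM.
     */
    static void example_wait_until_idle(BlockBackend *blk)
    {
        AioContext *ctx = blk_get_aio_context(blk);

        aio_context_acquire(ctx);   /* exactly one acquisition by the caller */
        AIO_WAIT_WHILE(ctx, example_blk_is_busy(blk));
        aio_context_release(ctx);
    }
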
> Reported-by: Max Reitz <mreitz@redhat.com>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/monitor/block-hmp-cmds.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
> index ebf1033f31..934100d0eb 100644
> --- a/block/monitor/block-hmp-cmds.c
> +++ b/block/monitor/block-hmp-cmds.c
> @@ -559,6 +559,7 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict)
>  {
>      BlockBackend *blk;
>      BlockBackend *local_blk = NULL;
> +    AioContext *ctx;
>      bool qdev = qdict_get_try_bool(qdict, "qdev", false);
>      const char *device = qdict_get_str(qdict, "device");
>      const char *command = qdict_get_str(qdict, "command");
> @@ -615,7 +616,13 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict)
>      qemuio_command(blk, command);
>  
>  fail:
> +    ctx = blk_get_aio_context(blk);
> +    aio_context_acquire(ctx);
> +
>      blk_unref(local_blk);
> +
> +    aio_context_release(ctx);

I dare to mention "code smell" here... Not about your fix, but about
the API. Can't we simplify it somehow? Maybe we can't; I don't
understand it well enough. But it seems bug-prone, and expensive in
human brain resources (whether developing, debugging, or reviewing).
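
To make that concern concrete, one purely hypothetical shape a simpler
API could take (no such helper exists in QEMU) is to fold the
acquire/unref/release dance into a single wrapper, so callers cannot
get the pairing wrong:

    #include "qemu/osdep.h"
    #include "block/aio.h"
    #include "sysemu/block-backend.h"

    /*
     * Hypothetical helper, not existing QEMU API: drop a BlockBackend
     * reference with its AioContext held, so the drain triggered by the
     * final unref runs under the lock that AIO_WAIT_WHILE() expects.
     */
    static void example_blk_unref_locked(BlockBackend *blk)
    {
        AioContext *ctx;

        if (!blk) {
            return;
        }

        ctx = blk_get_aio_context(blk);
        aio_context_acquire(ctx);
        blk_unref(blk);
        aio_context_release(ctx);   /* the context outlives the backend */
    }

A caller such as hmp_qemu_io() would then only write
example_blk_unref_locked(local_blk) at its fail label, with no lock
management in sight.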

>      hmp_handle_error(mon, err);
>  }
>  
> 



