
Re: [RFC] block/nbd: Move s->ioc on AioContext change


From: Hanna Reitz
Subject: Re: [RFC] block/nbd: Move s->ioc on AioContext change
Date: Tue, 1 Feb 2022 17:14:26 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.5.0

On 01.02.22 12:40, Hanna Reitz wrote:
On 01.02.22 12:18, Vladimir Sementsov-Ogievskiy wrote:
28.01.2022 18:51, Hanna Reitz wrote:
s->ioc must always be attached to the NBD node's AioContext.  If that
context changes, s->ioc must be attached to the new context.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1990835
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
This is an RFC because I believe there are some other things in the NBD
block driver that need attention on an AioContext change, too. Namely,
there are two timers (reconnect_delay_timer and open_timer) that are
also attached to the node's AioContext, and I'm afraid they need to be
handled, too.  Probably pause them on detach, and resume them on attach,
but I'm not sure, which is why I'm posting this as an RFC to get some
comments on that from someone who knows this code better than me. :)  (A rough
sketch of what such pause/resume handling could look like follows right after
the diff below.)

(Also, in a real v1, of course I'd want to add a regression test.)
---
  block/nbd.c | 28 ++++++++++++++++++++++++++++
  1 file changed, 28 insertions(+)

diff --git a/block/nbd.c b/block/nbd.c
index 63dbfa807d..119a774c04 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -2036,6 +2036,25 @@ static void nbd_cancel_in_flight(BlockDriverState *bs)
      nbd_co_establish_connection_cancel(s->conn);
  }

+static void nbd_attach_aio_context(BlockDriverState *bs,
+                                   AioContext *new_context)
+{
+    BDRVNBDState *s = bs->opaque;
+
+    if (s->ioc) {
+        qio_channel_attach_aio_context(s->ioc, new_context);
+    }
+}
+
+static void nbd_detach_aio_context(BlockDriverState *bs)
+{
+    BDRVNBDState *s = bs->opaque;
+
+    if (s->ioc) {
+        qio_channel_detach_aio_context(s->ioc);
+    }
+}
+
  static BlockDriver bdrv_nbd = {
      .format_name                = "nbd",
      .protocol_name              = "nbd",
@@ -2059,6 +2078,9 @@ static BlockDriver bdrv_nbd = {
      .bdrv_dirname               = nbd_dirname,
      .strong_runtime_opts        = nbd_strong_runtime_opts,
      .bdrv_cancel_in_flight      = nbd_cancel_in_flight,
+
+    .bdrv_attach_aio_context    = nbd_attach_aio_context,
+    .bdrv_detach_aio_context    = nbd_detach_aio_context,
  };

  static BlockDriver bdrv_nbd_tcp = {
@@ -2084,6 +2106,9 @@ static BlockDriver bdrv_nbd_tcp = {
      .bdrv_dirname               = nbd_dirname,
      .strong_runtime_opts        = nbd_strong_runtime_opts,
      .bdrv_cancel_in_flight      = nbd_cancel_in_flight,
+
+    .bdrv_attach_aio_context    = nbd_attach_aio_context,
+    .bdrv_detach_aio_context    = nbd_detach_aio_context,
  };

  static BlockDriver bdrv_nbd_unix = {
@@ -2109,6 +2134,9 @@ static BlockDriver bdrv_nbd_unix = {
      .bdrv_dirname               = nbd_dirname,
      .strong_runtime_opts        = nbd_strong_runtime_opts,
      .bdrv_cancel_in_flight      = nbd_cancel_in_flight,
+
+    .bdrv_attach_aio_context    = nbd_attach_aio_context,
+    .bdrv_detach_aio_context    = nbd_detach_aio_context,
  };

  static void bdrv_nbd_init(void)
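

For illustration, here is a minimal sketch of what the "pause on detach, resume
on attach" idea from the note above could look like for reconnect_delay_timer.
It is not part of the patch; the s->reconnect_expire_time_ns bookkeeping field
and the re-arming logic are assumptions made up for this sketch only.

/*
 * Sketch only -- not part of the RFC patch.  Assumes a hypothetical
 * s->reconnect_expire_time_ns field that remembers when the pending
 * reconnect delay timer would fire, so it can be re-armed in the new
 * AioContext.
 */
static void nbd_detach_aio_context(BlockDriverState *bs)
{
    BDRVNBDState *s = bs->opaque;

    if (s->reconnect_delay_timer) {
        /* "Pause": drop the timer so it cannot fire in the old context */
        timer_del(s->reconnect_delay_timer);
        timer_free(s->reconnect_delay_timer);
        s->reconnect_delay_timer = NULL;
    }
    if (s->ioc) {
        qio_channel_detach_aio_context(s->ioc);
    }
}

static void nbd_attach_aio_context(BlockDriverState *bs,
                                   AioContext *new_context)
{
    BDRVNBDState *s = bs->opaque;

    if (s->ioc) {
        qio_channel_attach_aio_context(s->ioc, new_context);
    }
    if (s->reconnect_expire_time_ns) {
        /* "Resume": re-create the timer in the new context */
        s->reconnect_delay_timer = aio_timer_new(new_context,
                                                 QEMU_CLOCK_REALTIME, SCALE_NS,
                                                 reconnect_delay_timer_cb, s);
        timer_mod(s->reconnect_delay_timer, s->reconnect_expire_time_ns);
    }
}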



Hmm. I was so happy to remove these handlers together with the connection coroutine :). But you are right, it seems I've removed too much :(.


open_timer exists only during the bdrv_open() handler, so I hope it should not exist at attach/detach time.

That’s… kind of surprising.  It’s good for me here, but as far as I can see it means that all of qemu blocks until the connection succeeds, right?  That doesn’t seem quite ideal...

Anyway, good for me. O:)

reconnect_delay_timer should exist only during an I/O request: it's created during a request if we don't have a connection, and the request will not finish until the timer elapses or the connection is established (the timer should be removed in that case, too). So, again, when attaching/detaching the context we should be in a drained section, so there are no in-flight requests and no reconnect_delay_timer.

Got it.  FWIW, other block drivers rely on this, too (e.g. null-aio with latency-ns set creates a timer in every I/O request and settles the request once the timer expires).

Looks like the timer isn’t removed when the connection is reestablished.  When I add an `assert(!s->reconnect_delay_timer)` to `nbd_attach_aio_context()` (on top of this patch), then I get:

$ ./qemu-nbd \
    --fork \
    --pid-file=/tmp/nbd.pid \
    --socket=/tmp/nbd.sock \
    -f raw \
    null-co://

$ (echo '{"execute": "qmp_capabilities"}';
sleep 1;
kill $(cat /tmp/nbd.pid);
./qemu-nbd \
    --fork \
    --pid-file=/tmp/nbd.pid \
    --socket=/tmp/nbd.sock \
    -f raw \
    null-co://;
echo '{"execute": "human-monitor-command",
       "arguments": {"command-line": "qemu-io nbd \"write 0 64k\""}}';
echo '{"execute": "x-blockdev-set-iothread",
       "arguments": {"node-name": "nbd", "iothread": "iothr0"}}') \
| ./qemu-system-x86_64 \
    -qmp stdio \
    -blockdev '{
        "node-name": "nbd",
        "driver": "nbd",
        "reconnect-delay": 1,
        "server": {
            "type": "unix",
            "path": "/tmp/nbd.sock"
        } }' \
    -object iothread,id=iothr0
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 2, "major": 6}, "package": "v6.2.0-1288-ge3116c38f7-dirty"}, "capabilities": ["oob"]}}
{"return": {}}
wrote 65536/65536 bytes at offset 0
64 KiB, 1 ops; 00.00 sec (170.326 MiB/sec and 2725.2189 ops/sec)
{"return": ""}
qemu-system-x86_64: ../block/nbd.c:2044: nbd_attach_aio_context: Assertion `!s->reconnect_delay_timer' failed.
Aborted (core dumped)


(The above kills the NBD server and immediately restarts it, so that the following write request will have to reconnect and immediately succeed.  The failed assertion when changing the AioContext shows that the timer is still there after successfully reconnecting.)
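
For reference, the assertion mentioned above is just a local debugging aid placed at the top of the nbd_attach_aio_context() from the patch, roughly:

static void nbd_attach_aio_context(BlockDriverState *bs,
                                   AioContext *new_context)
{
    BDRVNBDState *s = bs->opaque;

    /* Debugging aid only: expect no pending reconnect delay timer here */
    assert(!s->reconnect_delay_timer);

    if (s->ioc) {
        qio_channel_attach_aio_context(s->ioc, new_context);
    }
}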

Not sure whether that’s a problem in normal operation.  On master, there’s no failure, of course; the only problem is that `reconnect_delay_timer_cb()` will probably be run in the old context.  If in the new context we then have a concurrent reconnection attempt, perhaps the `reconnect_delay_timer_del()` might interfere with `reconnect_delay_timer_init()`, such that the former frees the timer (and sets it to NULL), and then the `timer_mod()` call in the latter function accesses NULL.  But that’d be extremely difficult to test, because that’s a very small time window...
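
To spell the suspected window out, here is a sketch of the assumed shape of the two helpers (an assumption for illustration, not the actual block/nbd.c source):

static void reconnect_delay_timer_del(BDRVNBDState *s)
{
    if (s->reconnect_delay_timer) {
        timer_free(s->reconnect_delay_timer);
        s->reconnect_delay_timer = NULL;
    }
}

static void reconnect_delay_timer_init(BDRVNBDState *s, uint64_t expire_time_ns)
{
    s->reconnect_delay_timer = aio_timer_new(bdrv_get_aio_context(s->bs),
                                             QEMU_CLOCK_REALTIME, SCALE_NS,
                                             reconnect_delay_timer_cb, s);
    /*
     * If reconnect_delay_timer_del() ran concurrently in the other
     * AioContext right at this point, the timer would already be freed
     * and s->reconnect_delay_timer reset to NULL ...
     */
    timer_mod(s->reconnect_delay_timer, expire_time_ns);
    /* ... so timer_mod() would be handed a NULL pointer. */
}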

I can definitely see the following problem with this RFC patch applied, though I don’t quite understand it:

./qemu-nbd \
    --fork \
    --pid-file=/tmp/nbd.pid \
    --socket=/tmp/nbd.sock \
    -f raw \
    null-co://
(echo '{"execute": "qmp_capabilities"}';
sleep 1;
kill $(cat /tmp/nbd.pid);
./qemu-nbd \
    --fork \
    --pid-file=/tmp/nbd.pid \
    --socket=/tmp/nbd.sock \
    -f raw \
    null-co://;
echo '{"execute": "human-monitor-command",
       "arguments": {"command-line": "qemu-io nbd \"write 0 64k\""}}';
echo '{"execute": "x-blockdev-set-iothread",
       "arguments": {"node-name": "nbd", "iothread": "iothr0"}}';
sleep 2;
kill $(cat /tmp/nbd.pid);
./qemu-nbd \
    --fork \
    --pid-file=/tmp/nbd.pid \
    --socket=/tmp/nbd.sock \
    -f raw \
    null-co://;
echo '{"execute": "human-monitor-command",
       "arguments": {"command-line": "qemu-io nbd \"write 0 64k\""}}';
echo '{"execute": "quit"}') \
| ./qemu-system-x86_64 \
    -qmp stdio \
    -blockdev '{
        "node-name": "nbd",
        "driver": "nbd",
        "reconnect-delay": 1,
        "server": {
            "type": "unix",
            "path": "/tmp/nbd.sock"
        } }' \
    -object iothread,id=iothr0
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 2, "major": 6}, "package": "v6.2.0-1129-g731bf9ede7"}, "capabilities": ["oob"]}}
{"return": {}}
wrote 65536/65536 bytes at offset 0
64 KiB, 1 ops; 00.00 sec (191.279 MiB/sec and 3060.4719 ops/sec)
{"return": ""}
{"return": {}}
wrote 65536/65536 bytes at offset 0
64 KiB, 1 ops; 00.00 sec (159.672 MiB/sec and 2554.7483 ops/sec)
{"return": ""}
{"return": {}}
{"timestamp": {"seconds": 1643731721, "microseconds": 22290}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}} qemu-system-x86_64: ../util/qemu-timer.c:115: timerlist_free: Assertion `!timerlist_has_timers(timer_list)' failed.
Aborted (core dumped)


I.e.:
1. Kill/restart the NBD server, as above, so that the reconnect on write succeeds immediately
2. Move the NBD node to a different AioContext
3. Wait two seconds, so that the reconnect timer expires
4. Repeat step 1, which will install a new reconnect timer
5. Have qemu quit before that new timer instance can expire

I have tried stripping this down to just a single timer instance, but didn’t succeed.  I always needed to have one instance expire in the original context and then start another one in the new context.
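
Given the observation above that the timer survives a successful reconnect, one possible direction (purely a sketch; where exactly such a call would go in the reconnect path is an assumption) is to drop the timer as soon as the connection is back:

/* Hypothetical hook, sketch only -- not actual block/nbd.c code */
static void nbd_reconnect_succeeded(BDRVNBDState *s)
{
    /*
     * Once the connection is re-established, drop the pending reconnect
     * delay timer so it can neither fire in a stale AioContext nor still
     * be registered when an iothread's timer list is freed on shutdown.
     */
    reconnect_delay_timer_del(s);
}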



