
Re: [RFC PATCH 2/3] block: Allow bdrv_run_co() from different AioContext


From: Stefan Reiter
Subject: Re: [RFC PATCH 2/3] block: Allow bdrv_run_co() from different AioContext
Date: Mon, 25 May 2020 16:18:54 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.9.0

On 5/12/20 4:43 PM, Kevin Wolf wrote:
> Coroutine functions that are entered through bdrv_run_co() are already
> safe to call from synchronous code in a different AioContext because
> bdrv_coroutine_enter() will schedule them in the context of the node.
>
> However, the coroutine fastpath still requires that we're already in the
> right AioContext when called in coroutine context.
>
> In order to make the behaviour more consistent and to make life a bit
> easier for callers, let's check the AioContext and automatically move
> the current coroutine around if we're not in the right context yet.
>
> Signed-off-by: Kevin Wolf <address@hidden>
> ---
>  block/io.c | 15 ++++++++++++++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/block/io.c b/block/io.c
> index c1badaadc9..7808e8bdc0 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -895,8 +895,21 @@ static int bdrv_run_co(BlockDriverState *bs, CoroutineEntry *entry,
>                         void *opaque, int *ret)
>  {
>      if (qemu_in_coroutine()) {
> -        /* Fast-path if already in coroutine context */
> +        Coroutine *self = qemu_coroutine_self();
> +        AioContext *bs_ctx = bdrv_get_aio_context(bs);
> +        AioContext *co_ctx = qemu_coroutine_get_aio_context(self);
> +
> +        if (bs_ctx != co_ctx) {
> +            /* Move to the iothread of the node */
> +            aio_co_schedule(bs_ctx, self);
> +            qemu_coroutine_yield();

I'm pretty sure this can lead to a race: if the target thread is faster to enter the scheduled coroutine than we are to reach qemu_coroutine_yield(), we'll get an abort ("Co-routine re-entered recursively"), since co->caller is still set.

I've seen this happen in our code when I try to do the same scheduling fandango there.

Is there a safer way to have a coroutine reschedule itself? Some lock missing?
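
For reference, here is the pattern pulled out into a standalone helper, with
the window I mean marked (just a sketch with an invented name, not code from
this patch):

    /* Move the running coroutine to new_ctx and yield so that the
     * target context can enter it from its scheduling BH. */
    static void coroutine_fn co_move_to_aio_context(AioContext *new_ctx)
    {
        Coroutine *self = qemu_coroutine_self();

        if (qemu_coroutine_get_aio_context(self) != new_ctx) {
            aio_co_schedule(new_ctx, self);
            /* <-- window: if new_ctx runs its scheduling BH and enters
             * the coroutine before the next line executes, co->caller
             * is still set and qemu_aio_coroutine_enter() aborts. */
            qemu_coroutine_yield();
        }
    }

As far as I can see nothing orders the BH in new_ctx against our yield, which
is why I suspect the race is real.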

> +        }
>          entry(opaque);
> +        if (bs_ctx != co_ctx) {
> +            /* Move back to the original AioContext */
> +            aio_co_schedule(co_ctx, self);
> +            qemu_coroutine_yield();
> +        }
>      } else {
>          Coroutine *co = qemu_coroutine_create(entry, opaque);
>          *ret = NOT_DONE;
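
For context, the caller-visible effect as I understand it: after this patch a
coroutine running in the main loop can call a bdrv_run_co()-based function on
a node that lives in an iothread, and the fastpath hops contexts around
entry() automatically. A hypothetical caller (invented name; assumes bs is
attached to an iothread's AioContext and that bdrv_flush() is one of the
bdrv_run_co() users after patch 1/3):

    /* Runs as a coroutine in the main loop.  With the patch, the fastpath
     * above moves us into bs's AioContext for the duration of the flush
     * and back afterwards; without it, entry() would run in the wrong
     * context. */
    static void coroutine_fn flush_from_main_loop(BlockDriverState *bs)
    {
        int ret = bdrv_flush(bs);
        if (ret < 0) {
            error_report("flush failed: %s", strerror(-ret));
        }
    }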




