[Qemu-devel] [PATCH 03/16] aio: introduce aio_poll_internal
From: Paolo Bonzini
Subject: [Qemu-devel] [PATCH 03/16] aio: introduce aio_poll_internal
Date: Fri, 15 Jan 2016 16:12:06 +0100
Move the implementation of aio_poll to aio_poll_internal, and introduce
a wrapper for public use.  For now the wrapper just asserts that aio_poll
is being called correctly: either from the thread that manages the
context, or with the QEMU global mutex held.
The next patch, however, will completely separate the two cases.
Signed-off-by: Paolo Bonzini <address@hidden>
---
aio-posix.c | 2 +-
aio-win32.c | 2 +-
async.c | 8 ++++++++
include/block/aio.h | 6 ++++++
4 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/aio-posix.c b/aio-posix.c
index 482b316..980bd41 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -400,7 +400,7 @@ static void add_pollfd(AioHandler *node)
     npfd++;
 }
 
-bool aio_poll(AioContext *ctx, bool blocking)
+bool aio_poll_internal(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     int i, ret;
diff --git a/aio-win32.c b/aio-win32.c
index cdc4456..6622cbf 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -280,7 +280,7 @@ bool aio_dispatch(AioContext *ctx)
     return progress;
 }
 
-bool aio_poll(AioContext *ctx, bool blocking)
+bool aio_poll_internal(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
diff --git a/async.c b/async.c
index b3efd3c..856aa75 100644
--- a/async.c
+++ b/async.c
@@ -299,6 +299,14 @@ void aio_notify_accept(AioContext *ctx)
     }
 }
 
+bool aio_poll(AioContext *ctx, bool blocking)
+{
+    assert(qemu_mutex_iothread_locked() ||
+           aio_context_in_iothread(ctx));
+
+    return aio_poll_internal(ctx, blocking);
+}
+
 static void aio_timerlist_notify(void *opaque)
 {
     aio_notify(opaque);
diff --git a/include/block/aio.h b/include/block/aio.h
index 9434665..986be97 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -287,6 +287,12 @@ bool aio_pending(AioContext *ctx);
  */
 bool aio_dispatch(AioContext *ctx);
 
+/* Same as aio_poll, but only meant for use in the I/O thread.
+ *
+ * This is used internally in the implementation of aio_poll.
+ */
+bool aio_poll_internal(AioContext *ctx, bool blocking);
+
 /* Progress in completing AIO work to occur.  This can issue new pending
  * aio as a result of executing I/O completion or bh callbacks.
  *
--
2.5.0