
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 0/2] aio: add aio_context_acquire() and aio_context_release()
Date: Wed, 09 Oct 2013 12:16:47 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130923 Thunderbird/17.0.9

On 09/10/2013 11:55, Stefan Hajnoczi wrote:
> aio_context_acquire() and aio_context_release() make it possible for multiple
> threads to safely operate on a shared AioContext.  This is a prerequisite for
> using the block layer outside the QEMU global mutex.
> 
> Imagine that a dataplane thread is performing I/O on behalf of the guest when
> the user issues a monitor command that needs to access the BlockDriverState.
> The monitor thread needs to acquire the AioContext before calling bdrv_*()
> functions on it.  This prevents the dataplane thread from interfering with the
> monitor thread.
> 
> There was a discussion in the RFC email thread about how to implement
> fairness: each thread should get a turn to acquire the AioContext so that
> starvation is avoided.
> 
> Paolo suggested doing what the QEMU main loop does today: drop the lock
> before invoking ppoll(2).  It turns out that this is tricky since we want
> both threads to be able to call aio_poll() simultaneously.
> AioContext->pollfds[] currently prevents two simultaneous aio_poll() calls
> since it is a shared resource.  It's also tricky to avoid deadlocks if two
> threads execute aio_poll() simultaneously except by waking all threads
> each time *any* thread makes any progress.
> 
> I found the simplest solution is to implement RFifoLock, a recursive lock with
> FIFO ordering.  This lock supports the semantics we need for the following
> pattern:
> 
>   /* Event loop thread */
>   while (running) {
>       aio_context_acquire(ctx);
>       aio_poll(ctx, true);
>       aio_context_release(ctx);
>   }
> 
>   /* Another thread */
>   aio_context_acquire(ctx);
>   bdrv_read(bs, 0x1000, buf, 1);
>   aio_context_release(ctx);
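
The recursive-with-FIFO-ordering semantics described above can be sketched
roughly as follows.  This is only an illustration of the idea (ticket-based
FIFO ordering plus owner-based recursion), not QEMU's actual RFifoLock code;
the type and function names here are assumptions:

```c
/* Simplified sketch of a recursive lock with FIFO ordering.
 * Not QEMU's implementation; names and fields are illustrative. */
#include <assert.h>
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    pthread_t owner;        /* valid only while nesting > 0 */
    unsigned nesting;       /* recursion depth for the owning thread */
    unsigned head, tail;    /* FIFO tickets: head is the next turn served */
} RFifoLockSketch;

static void rfifolock_init(RFifoLockSketch *r)
{
    pthread_mutex_init(&r->lock, NULL);
    pthread_cond_init(&r->cond, NULL);
    r->nesting = 0;
    r->head = r->tail = 0;
}

static void rfifolock_lock(RFifoLockSketch *r)
{
    pthread_mutex_lock(&r->lock);

    /* Recursive acquisition by the current owner succeeds immediately */
    if (r->nesting > 0 && pthread_equal(r->owner, pthread_self())) {
        r->nesting++;
        pthread_mutex_unlock(&r->lock);
        return;
    }

    /* Take a ticket; wait until it is our turn and the lock is free.
     * Tickets are what give waiters FIFO ordering and prevent starvation. */
    unsigned ticket = r->tail++;
    while (ticket != r->head || r->nesting > 0) {
        pthread_cond_wait(&r->cond, &r->lock);
    }
    r->head++;              /* consume our turn */
    r->owner = pthread_self();
    r->nesting = 1;
    pthread_mutex_unlock(&r->lock);
}

static void rfifolock_unlock(RFifoLockSketch *r)
{
    pthread_mutex_lock(&r->lock);
    assert(r->nesting > 0 && pthread_equal(r->owner, pthread_self()));
    if (--r->nesting == 0) {
        /* Wake all waiters; the ticket check enforces FIFO order */
        pthread_cond_broadcast(&r->cond);
    }
    pthread_mutex_unlock(&r->lock);
}
```

With this shape, the event loop thread and another thread can both run the
acquire/release pattern above, and a waiter that arrived first is served
first.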

Should bdrv_read itself call aio_context_acquire?  If not, what's the
use of recursion?

> When the other thread wants to acquire the AioContext but the lock is
> contended, it uses aio_notify(ctx) to bump the event loop thread out of
> aio_poll().  Fairness ensures everyone gets an opportunity to use the
> AioContext.
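
The wakeup mechanism Stefan describes can be sketched with a plain pipe: the
event loop polls the pipe's read end along with its other file descriptors,
and a notification writes one byte to make the loop's poll return.  This is
a minimal stand-in, not QEMU's aio_notify()/aio_poll() code (which uses an
event notifier inside AioContext); the names below are assumptions:

```c
/* Minimal sketch of bumping an event loop out of a blocking poll.
 * Illustrative only; QEMU's aio_notify() uses an AioContext notifier. */
#include <assert.h>
#include <poll.h>
#include <unistd.h>

static int notify_fds[2];   /* [0] = read end polled by the event loop,
                               [1] = write end used by the notifier */

static void notifier_init(void)
{
    int ret = pipe(notify_fds);
    assert(ret == 0);
}

/* Called by a thread that wants the event loop to stop blocking:
 * any write makes the read end readable, so poll() returns. */
static void aio_notify_sketch(void)
{
    char byte = 0;
    ssize_t n = write(notify_fds[1], &byte, 1);
    (void)n;
}

/* Stand-in for the blocking part of aio_poll(): returns 1 if woken
 * by a notification, 0 on timeout. */
static int aio_poll_sketch(int timeout_ms)
{
    struct pollfd pfd = { .fd = notify_fds[0], .events = POLLIN };
    int ret = poll(&pfd, 1, timeout_ms);
    if (ret > 0 && (pfd.revents & POLLIN)) {
        char byte;
        ssize_t n = read(notify_fds[0], &byte, 1);  /* drain the wakeup */
        (void)n;
        return 1;
    }
    return 0;
}
```

Once the loop thread wakes up and releases the lock, the FIFO ordering of
the lock ensures the contending thread gets its turn.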

The implementation of fairness is very nice.  The callbacks are an
interesting solution, too.  They ensure that the lock is released fast,
but at the same time some progress is allowed.

I don't like recursive mutexes very much, because it's easier to get
AB-BA deadlocks with them.  However, it's not easy to avoid them except
by adding a lot of acquire/release pairs.  And there's also a precedent
(the mmap_lock used by user-level emulation).

We can avoid the problem for now by heavily limiting the coarse-grained
mutexes we have---basically only the iothread mutex and this new lock.
We should aim at making locks fine-grained enough that we hardly have
any nesting, or use RCU so that the read-side is independent and the
write-side can just keep using the iothread mutex.

Still, in the long run it would be nice to push acquire/release pairs up
to all the users and avoid recursion...

Acked-by: Paolo Bonzini <address@hidden>

Paolo


