From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC] aio: add aio_context_acquire() and aio_context_release()
Date: Fri, 30 Aug 2013 16:25:50 +0200

On Fri, Aug 30, 2013 at 3:24 PM, Paolo Bonzini <address@hidden> wrote:
> On 30/08/2013 11:22, Stefan Hajnoczi wrote:
>> On Thu, Aug 29, 2013 at 10:26:31AM +0200, Paolo Bonzini wrote:
>>> On 27/08/2013 16:39, Stefan Hajnoczi wrote:
>>>> +void aio_context_acquire(AioContext *ctx)
>>>> +{
>>>> +    qemu_mutex_lock(&ctx->acquire_lock);
>>>> +    while (ctx->owner) {
>>>> +        assert(!qemu_thread_is_self(ctx->owner));
>>>> +        aio_notify(ctx); /* kick current owner */
>>>> +        qemu_cond_wait(&ctx->acquire_cond, &ctx->acquire_lock);
>>>> +    }
>>>> +    qemu_thread_get_self(&ctx->owner_thread);
>>>> +    ctx->owner = &ctx->owner_thread;
>>>> +    qemu_mutex_unlock(&ctx->acquire_lock);
>>>> +}
>>>> +
>>>> +void aio_context_release(AioContext *ctx)
>>>> +{
>>>> +    qemu_mutex_lock(&ctx->acquire_lock);
>>>> +    assert(ctx->owner && qemu_thread_is_self(ctx->owner));
>>>> +    ctx->owner = NULL;
>>>> +    qemu_cond_signal(&ctx->acquire_cond);
>>>> +    qemu_mutex_unlock(&ctx->acquire_lock);
>>>> +}
>>>
>>> Thinking more about it, there is a risk of busy waiting here if one
>>> thread releases the AioContext and tries to acquire it again (as in the
>>> common case of one thread doing acquire/poll/release in a loop).  It
>>> would only work if mutexes guarantee some level of fairness.
>>
>> You are right.  I wrote a test that showed there is no fairness.  For
>> some reason I thought the condvar would provide fairness.
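
For the record, the kind of test I have in mind boils down to something
like this (stripped to plain pthreads rather than qemu-thread, so only
the shape of it, not the exact code): the "other thread" counter stays
tiny compared to the event-loop counter.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool owned;               /* stands in for ctx->owner */
    static long other_acquisitions;  /* only touched while the context is held */

    static void ctx_acquire(void)
    {
        pthread_mutex_lock(&lock);
        while (owned) {
            pthread_cond_wait(&cond, &lock);
        }
        owned = true;
        pthread_mutex_unlock(&lock);
    }

    static void ctx_release(void)
    {
        pthread_mutex_lock(&lock);
        owned = false;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }

    static void *other_thread(void *arg)
    {
        (void)arg;
        for (;;) {                   /* another thread wanting the context */
            ctx_acquire();
            other_acquisitions++;
            ctx_release();
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        long i;

        pthread_create(&tid, NULL, other_thread, NULL);

        /* event-loop-like thread: release and immediately re-acquire */
        for (i = 0; i < 1000000; i++) {
            ctx_acquire();
            /* poll + handler dispatch would happen here */
            ctx_release();
        }

        ctx_acquire();               /* counter is only written under the context */
        printf("event loop: %ld acquisitions, other thread: %ld\n",
               i, other_acquisitions);
        ctx_release();
        return 0;                    /* exiting also ends the looping thread */
    }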
>>
>>> If you implement recursive acquisition, however, you can make aio_poll
>>> acquire the context up until just before it invokes ppoll, and then
>>> again after it comes back from the ppoll.  The two acquire/release pairs
>>> will be no-ops if called during "synchronous" I/O such as
>>>
>>>   /* Another thread */
>>>   aio_context_acquire(ctx);
>>>   bdrv_read(bs, 0x1000, buf, 1);
>>>   aio_context_release(ctx);
>>>
>>> Yet they will do the right thing when called from the event loop thread.
>>>
>>> (where the bdrv_read can actually be something more complicated such as
>>> a live snapshot or, in general, anything involving bdrv_drain_all).
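
If I'm reading this right, recursive acquisition would be roughly the
following (sketch only; acquire_depth is a field I'm inventing here to
track the nesting level):

    void aio_context_acquire(AioContext *ctx)
    {
        qemu_mutex_lock(&ctx->acquire_lock);
        if (ctx->owner && qemu_thread_is_self(ctx->owner)) {
            ctx->acquire_depth++;            /* already ours, just nest */
            qemu_mutex_unlock(&ctx->acquire_lock);
            return;
        }
        while (ctx->owner) {
            assert(!qemu_thread_is_self(ctx->owner));
            aio_notify(ctx);                 /* kick current owner */
            qemu_cond_wait(&ctx->acquire_cond, &ctx->acquire_lock);
        }
        qemu_thread_get_self(&ctx->owner_thread);
        ctx->owner = &ctx->owner_thread;
        ctx->acquire_depth = 1;
        qemu_mutex_unlock(&ctx->acquire_lock);
    }

    void aio_context_release(AioContext *ctx)
    {
        qemu_mutex_lock(&ctx->acquire_lock);
        assert(ctx->owner && qemu_thread_is_self(ctx->owner));
        if (--ctx->acquire_depth == 0) {     /* outermost release */
            ctx->owner = NULL;
            qemu_cond_signal(&ctx->acquire_cond);
        }
        qemu_mutex_unlock(&ctx->acquire_lock);
    }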
>>
>> This doesn't guarantee fairness either, right?
>
> Yes, but the non-zero timeout of ppoll would in practice guarantee it.
> The problem happens only when the release and acquire are very close in
> time, which shouldn't happen if the ppoll is done with the context
> released.
>
>> With your approach another thread can squeeze in when ppoll(2) is
>> returning, so newer fd activity can be processed *before* older
>> activity.  Not sure out-of-order callbacks are a problem, but it can
>> happen since we don't have fairness.
>
> I think this should not happen.  The other thread would rerun ppoll(2).
> Since poll/ppoll are level-triggered, you could have some flags
> processed twice.  But this is not a problem; we had the same bug with
> the iothread and qemu_aio_wait, and we should have fixed all
> occurrences.

I forgot they are level-triggered.  Releasing around the blocking
operation (ppoll) is similar to how the iothread and vcpu threads work,
so it seems like a good idea to follow that pattern here too.
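
Roughly the shape I have in mind is the following (hand-waving over the
real aio_poll() internals with "..." comments, and assuming
acquire/release become recursive as you suggest):

    /* rough shape only: the context is dropped across the blocking
     * ppoll(2) so another thread can take it while we sleep */
    bool aio_poll(AioContext *ctx, bool blocking)
    {
        struct pollfd *fds = NULL;
        nfds_t nfds = 0;
        struct timespec zero = { 0, 0 };
        int ret;

        aio_context_acquire(ctx);   /* nests if the caller already holds it */

        /* ... run bottom halves, build the pollfd array (fds, nfds) ... */

        aio_context_release(ctx);
        /* when blocking, sleep until fd activity or aio_notify() wakes us;
         * another thread can acquire the context while we are in ppoll */
        ret = ppoll(fds, nfds, blocking ? NULL : &zero, NULL);
        aio_context_acquire(ctx);

        /* ... dispatch fd handlers for the returned revents; ppoll is
         * level-triggered, so anything another thread already handled
         * while we slept just shows up as no work to do ... */

        aio_context_release(ctx);
        return ret > 0;
    }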

I'll implement this in the next revision.

Stefan


