qemu-devel
[Top][All Lists]
Advanced

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Qemu-devel] [PATCH v3 2/2] QEMUBH: make AioContext's bh re-entrant


From: liu ping fan
Subject: Re: [Qemu-devel] [PATCH v3 2/2] QEMUBH: make AioContext's bh re-entrant
Date: Fri, 21 Jun 2013 12:35:41 +0800

[...]
>>>
>>> qemu_bh_delete is safe as long as you wait for the bottom half to stop
>>> before deleting the containing object.  Once we have RCU, deletion of
>>> QOM objects will be RCU-protected.  Hence, a simple way could be to put
>>> the first part of aio_bh_poll() within rcu_read_lock/unlock.
>>>
>> In fact, I have some idea about this,  introduce another member -
>> Object for QEMUBH which will be refereed in cb, then we leave anything
>> to refcnt mechanism.
>> For qemu_bh_cancel(), I do not figure out whether it is important or
>> not to sync with caller.
>
> This is a separate patch anyway... and a long discussion to have before
> too. :)
>
> Let's concentrate on one thing at a time.
>
Yes, will do like this.

Regards,
Pingfan

> Paolo
>
>> diff --git a/async.c b/async.c
>> index 4b17eb7..60c35a1 100644
>> --- a/async.c
>> +++ b/async.c
>> @@ -61,6 +61,7 @@ int aio_bh_poll(AioContext *ctx)
>>  {
>>      QEMUBH *bh, **bhp, *next;
>>      int ret;
>> +    int sched;
>>
>>  {
>>      QEMUBH *bh, **bhp, *next;
>>      int ret;
>> +    int sched;
>>
>>      ctx->walking_bh++;
>>
>> @@ -69,8 +70,10 @@ int aio_bh_poll(AioContext *ctx)
>>          /* Make sure fetching bh before accessing its members */
>>          smp_read_barrier_depends();
>>          next = bh->next;
>> -        if (!bh->deleted && bh->scheduled) {
>> -            bh->scheduled = 0;
>> +        sched = 0;
>> +        atomic_xchg(&bh->scheduled, sched);
>
> This is expensive.
>
>> +        if (!bh->deleted && sched) {
>> +            //bh->scheduled = 0;
>>              if (!bh->idle)
>>                  ret = 1;
>>              bh->idle = 0;
>> @@ -79,6 +82,9 @@ int aio_bh_poll(AioContext *ctx)
>>               */
>>              smp_rmb();
>>              bh->cb(bh->opaque);
>> +            if (bh->obj) {
>> +                object_unref(bh->obj);
>> +            }
>>          }
>>      }
>>
>> @@ -105,8 +111,12 @@ int aio_bh_poll(AioContext *ctx)
>>
>>  void qemu_bh_schedule_idle(QEMUBH *bh)
>>  {
>> -    if (bh->scheduled)
>> +    int sched = 1;
>> +
>> +    atomic_xchg( &bh->scheduled, sched);
>> +    if (sched) {
>>          return;
>> +    }
>>      /* Make sure any writes that are needed by the callback are done
>>       * before the locations are read in the aio_bh_poll.
>>       */
>> @@ -117,25 +127,46 @@ void qemu_bh_schedule_idle(QEMUBH *bh)
>>
>>  void qemu_bh_schedule(QEMUBH *bh)
>>  {
>> -    if (bh->scheduled)
>> +    int sched = 1;
>> +
>> +    atomic_xchg( &bh->scheduled, sched);
>> +    if (sched) {
>>          return;
>> +    }
>>      /* Make sure any writes that are needed by the callback are done
>>       * before the locations are read in the aio_bh_poll.
>>       */
>>      smp_wmb();
>>      bh->scheduled = 1;
>> +    if (bh->obj) {
>> +        object_ref(bh->obj);
>> +    }
>>      bh->idle = 0;
>>      aio_notify(bh->ctx);
>>  }
>>
>>  void qemu_bh_cancel(QEMUBH *bh)
>>  {
>> -    bh->scheduled = 0;
>> +    int sched = 0;
>> +
>> +    atomic_xchg( &bh->scheduled, sched);
>> +    if (sched) {
>> +        if (bh->obj) {
>> +            object_ref(bh->obj);
>> +        }
>> +    }
>>  }
>>
>>  void qemu_bh_delete(QEMUBH *bh)
>>  {
>> -    bh->scheduled = 0;
>> +    int sched = 0;
>> +
>> +    atomic_xchg( &bh->scheduled, sched);
>> +    if (sched) {
>> +        if (bh->obj) {
>> +            object_ref(bh->obj);
>> +        }
>> +    }
>>      bh->deleted = 1;
>>  }
>>
>> Regards,
>> Pingfan
>>>> The other thing I'm unclear on is the ->idle assignment followed
>>>> immediately by a ->scheduled assignment.  Without memory barriers
>>>> aio_bh_poll() isn't guaranteed to get an ordered view of these updates:
>>>> it may see an idle BH as a regular scheduled BH because ->idle is still
>>>> 0.
>>>
>>> Right.  You need to order ->idle writes before ->scheduled writes, and
>>> add memory barriers, or alternatively use two bits in ->scheduled so
>>> that you can assign both atomically.
>>>
>>> Paolo
>



reply via email to

[Prev in Thread] Current Thread [Next in Thread]