Re: [Qemu-devel] [PATCH 0/3] make bh safe with hot-unplug

From: liu ping fan
Subject: Re: [Qemu-devel] [PATCH 0/3] make bh safe with hot-unplug
Date: Wed, 26 Jun 2013 17:44:13 +0800

On Wed, Jun 26, 2013 at 4:38 PM, Paolo Bonzini <address@hidden> wrote:
> On 26/06/2013 10:20, liu ping fan wrote:
>>>> On the other hand, pushing _delete() out of the finalization path is
>>>> not easy, since we do not know when the DeviceState is done with its bh.
>>> See above:
>>> - if the BH will run in the iothread, the BH is definitely not running
>>> (because the BH runs under the BQL, and the reclaimer owns it)
>> It works for the case in which the DeviceState's reclaimer calls
>> _bh_delete().  But in another case (which also exists in the current
>> code), where we call _bh_delete() in the callback itself, the bh can
>> still be scheduled in the meantime, and oops!
> But you may know that the BH is not scheduled after removal, too.
No, the removal can run in parallel with another mmio dispatch, which
may schedule a bh.  That is, the removal cannot call _bh_delete()
itself, so it does not know whether the bh will be scheduled or not.

> There are not that many devices that have bottom halves (apart from
> those that use bottom halves in ptimer).  If they really need to, they
> can do the object_ref/unref themselves.  But I expect this to be rare,
> and generic code should not need an "owner" field in bottom halves.  For
> example, block devices should stop sending requests after removal.

Agree :), so the aim is to handle bh safely on unplug; if we can
solve it with RCU, so much the better!

Yes, but do we also need to account for "stop sending requests != no
mmio dispatch in flight"?  I think these in-flight mmio dispatches are
the culprit :), making it hard to synchronize with stopping the bh.
>>> - if the BH is running in another thread, waiting for that thread to
>>> terminate obviously makes the BH not running.
>> This implies that, except for qemu_aio_context, an AioContext can
>> only be shared by one device, right?  Otherwise we cannot simply
>> terminate the thread.  If that is true, why can we not have more
>> than one shared AioContext, just like qemu_aio_context?
> Yes, if you do it that way you can have only one AioContext per device.
>  RCU is another way, and doesn't have the same limitation.
> We should avoid introducing infrastructure that we are not sure is
> needed.  For what it's worth, your patches to make the bottom half list
> thread-safe are also slightly premature.  However, they do not change
> the API and it makes some sense to add the infrastructure.  In this
> case, I'm simply not sure that we're there yet.
I see, thanks.
> If you look at the memory work, for example, the owner patches happen to
> be useful for BQL-less dispatch too, but they are solving existing
> hot-unplug bugs.
Oh, I will read it again; I had thought the owner was only used for
reference counting.

>>> What we need is separation of removal and reclamation.  Without that,
>>> any effort to make things unplug-safe is going to be way way more
>>> complicated than necessary.
>> Agree, but when I tried this for bh, I found that separating removal
>> from reclamation is not easy.  We cannot call _bh_delete() in
>> acpi_piix_eject_slot(), because an mmio dispatch can schedule a bh
>> at the same time.
> acpi_piix_eject_slot() is removal, not reclamation.  The plan there is
> that qdev_free calls the exit callback immediately (which can do
> qemu_bh_delete), and schedules an unref after the next RCU grace period.
>  At the next RCU grace period the BH will not be running, thus it is
> safe to finalize the object.
I have two questions:
1st, who triggers qdev_free?  // I have not figured out the whole
design, but it seems quite different from the kernel's, where exit is
called only after the refcount drops to 0.
2nd, _bh_delete() means that any not-yet-served request will be
canceled.  Is that legal, and will it not lose data?  (Not sure, since
I do not know who will call qdev_free.)

> Paolo

>> And as explained, only two places can hold the _bh_delete().
>> In short, with RCU we need to constrain the calling idiom of bh,
>> i.e., put the deletion in the reclaimer.  My patch, on the other
>> hand, tries to leave the calling idiom as it is and handle this
>> issue inside the bh code.
>> Regards,
>> Pingfan
>>> Paolo
