From: Paolo Bonzini
Subject: Re: [Qemu-devel] Block I/O outside the QEMU global mutex was "Re: [RFC PATCH 00/17] Support for multiple "AIO contexts""
Date: Tue, 09 Oct 2012 15:50:15 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120911 Thunderbird/15.0.1

On 09/10/2012 15:21, Avi Kivity wrote:
> On 10/09/2012 03:11 PM, Paolo Bonzini wrote:
>>> But no, it's actually impossible.  Hotplug may be triggered from a vcpu
>>> thread, which clearly can't be stopped.
>>
>> Hotplug should always be asynchronous (because that's how hardware
>> works), so it should always be possible to delegate the actual work to a
>> non-VCPU thread.  Or not?
> 
> The actual device deletion can happen from a different thread, as long
> as you isolate the device before.  That's part of the garbage collector
> idea.
> 
> vcpu thread:
>   rcu_read_lock
>   lookup
>   dispatch
>     mmio handler
>       isolate
>       queue(delete_work)
>   rcu_read_unlock
> 
> worker thread:
>   process queue
>     delete_work
>       synchronize_rcu() / stop_machine()
>       acquire qemu lock
>       delete object
>       drop qemu lock
> 
> Compared to the garbage collector idea, this drops fine-grained locking
> for the qdev tree, a significant advantage.  But it still suffers from
> dispatching inside the rcu critical section, which is something we want
> to avoid.
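
To make the sequence above concrete, here is a minimal C sketch of that
deferred-deletion pattern.  It assumes liburcu-style rcu_read_lock(),
rcu_read_unlock() and synchronize_rcu(); every other name (Device,
lookup_device, device_isolate, the work queue, qemu_lock/qemu_unlock) is
illustrative, not an actual QEMU API:

    #include <stdbool.h>
    #include <stdint.h>
    #include <urcu.h>    /* rcu_read_lock/unlock, synchronize_rcu */

    typedef struct Device {
        void (*mmio_write)(struct Device *d, uint64_t addr, uint64_t val);
        bool pending_unplug;                 /* set by the mmio handler */
    } Device;

    /* Illustrative helpers, declarations only -- not real QEMU functions. */
    extern Device *lookup_device(uint64_t addr);
    extern void device_isolate(Device *dev);
    extern void device_delete(Device *dev);
    struct WorkQueue;
    extern struct WorkQueue gc_queue;
    extern void work_queue_push(struct WorkQueue *q, Device *dev);
    extern Device *work_queue_pop(struct WorkQueue *q);
    extern void qemu_lock(void);             /* the big QEMU lock */
    extern void qemu_unlock(void);

    /* vcpu thread: lookup and dispatch run inside the read-side section. */
    static void vcpu_mmio_dispatch(uint64_t addr, uint64_t val)
    {
        rcu_read_lock();
        Device *dev = lookup_device(addr);   /* RCU-protected lookup */
        if (dev) {
            dev->mmio_write(dev, addr, val); /* may request its own unplug */
            if (dev->pending_unplug) {
                device_isolate(dev);         /* unlink: new lookups miss it */
                work_queue_push(&gc_queue, dev); /* defer the real deletion */
            }
        }
        rcu_read_unlock();
    }

    /* worker thread: frees a device only after every reader is done with it. */
    static void gc_worker(void)
    {
        Device *dev;
        while ((dev = work_queue_pop(&gc_queue)) != NULL) {
            synchronize_rcu();               /* wait out all vcpu readers */
            qemu_lock();
            device_delete(dev);
            qemu_unlock();
        }
    }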

But we are not Linux, and I think the tradeoffs are different for RCU in
Linux vs. QEMU.

For CPUs in the kernel, running user code is just one way to get things
done; QEMU threads are much more event-driven, and their whole purpose
is to either run the guest or sleep, until "something happens" (VCPU
exit or readable fd).  In other words, QEMU threads should be able to
stay most of the time in KVM_RUN or select() for any workload (to some
approximation).

Not just that: we do not need to minimize RCU critical sections, because
we want to minimize the time spent in QEMU anyway, period.

So I believe that to some approximation, in QEMU we can completely
ignore everything else, and behave as if threads were always under
rcu_read_lock(), except when in KVM_RUN/select.  KVM_RUN and select are
what Paul McKenney calls extended quiescent states, and in fact the
following mapping works:

    rcu_extended_quiesce_start()     -> rcu_read_unlock();
    rcu_extended_quiesce_end()       -> rcu_read_lock();
    rcu_read_lock/unlock()           -> nop

This in turn means that dispatching inside the RCU critical section is
not really bad.
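
As an illustration only (nothing like this exists in the QEMU tree today),
the mapping could be expressed with liburcu's QSBR flavor, whose
rcu_thread_offline()/rcu_thread_online() behave much like these extended
quiescent states and whose rcu_read_lock()/rcu_read_unlock() already compile
to no-ops.  The rcu_extended_quiesce_* wrappers and handle_vcpu_exit() below
are invented names:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>
    #include <urcu-qsbr.h>   /* QSBR flavor: read lock/unlock are no-ops */

    static inline void rcu_extended_quiesce_start(void)
    {
        rcu_thread_offline();    /* plays the role of rcu_read_unlock() */
    }

    static inline void rcu_extended_quiesce_end(void)
    {
        rcu_thread_online();     /* plays the role of rcu_read_lock() */
    }

    extern void handle_vcpu_exit(int vcpu_fd);   /* illustrative only */

    static void vcpu_thread_loop(int vcpu_fd)
    {
        rcu_register_thread();                   /* QSBR bookkeeping */
        for (;;) {
            rcu_extended_quiesce_start();
            ioctl(vcpu_fd, KVM_RUN, 0);          /* quiescent inside the guest */
            rcu_extended_quiesce_end();

            handle_vcpu_exit(vcpu_fd);           /* effectively under rcu_read_lock() */
        }
    }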

Paolo


