
Re: [Qemu-devel] global_mutex and multithread.


From: Paolo Bonzini
Subject: Re: [Qemu-devel] global_mutex and multithread.
Date: Thu, 15 Jan 2015 22:41:05 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.3.0


On 15/01/2015 21:53, Mark Burton wrote:
>> Jan said he had it working at least on ARM (MusicPal).
> 
> yeah - our problem is when we enable multi-threads - which I don’t believe Jan
> did…

Multithreaded TCG, or single-threaded TCG with SMP?

>>> One thing I wonder - why do we need to go to the extent of mutexing
>>> in the TCG like this? Why can’t you simply put a mutex get/release on
>>> the slow path? If the core is going to do ‘fast path’ access to the
>>> memory - even if that memory was IO mapped - would it matter if it
>>> didn’t have the mutex?
>>
>> Because there is no guarantee that the memory map isn't changed by a
>> core under the feet of another.  The TLB (in particular the "iotlb") is
>> only valid with reference to a particular memory map.
> 
>>
>> Changes to the memory map certainly happen in the slow path, but lookups
>> are part of the fast path.  Even an rwlock is too slow for a fast path,
>> hence the plan of going with RCU.
> 
> Could we arrange the world such that lookups ‘succeed’ (the wheels
> don’t fall off) - either getting the old value, or the new, but not getting
> rubbish - and we still only take the mutex if we are going to make
> alterations to the MM itself? (I haven’t looked at the code around that…
> so sorry if the question is ridiculous).

That's the definition of RCU. :)  Look at the docs in
http://permalink.gmane.org/gmane.comp.emulators.qemu/313929 for more
information. :)
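
To make that concrete, here is a minimal, self-contained sketch of the
pattern (plain C11 atomics plus a pthread mutex, not QEMU's actual RCU
code; the names MemoryMap/lookup/remap are made up for illustration).
Readers do a single lock-free acquire-load of the published map pointer
and always see either the old map or the new one, never a torn mix;
only writers take the mutex and publish a replacement.  The grace-period
wait that real RCU adds before freeing the old map is only noted in a
comment.

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct MemoryMap {
    uint64_t base;
    uint64_t size;
} MemoryMap;

static _Atomic(MemoryMap *) current_map;   /* the published map pointer */
static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fast path: no lock, just one acquire load; whichever map we get is
 * internally consistent for the duration of this lookup. */
static uint64_t lookup(uint64_t addr)
{
    MemoryMap *map = atomic_load_explicit(&current_map, memory_order_acquire);
    return addr - map->base;
}

/* Slow path: only map *changes* are serialized by the mutex. */
static void remap(uint64_t base, uint64_t size)
{
    MemoryMap *new_map = malloc(sizeof(*new_map));
    if (!new_map) {
        abort();
    }
    new_map->base = base;
    new_map->size = size;

    pthread_mutex_lock(&map_lock);
    MemoryMap *old = atomic_exchange_explicit(&current_map, new_map,
                                              memory_order_acq_rel);
    pthread_mutex_unlock(&map_lock);

    /* Real RCU would wait for a grace period here (synchronize_rcu())
     * before free(old), so that no reader can still hold the old map.
     * That reclamation step is exactly what the QEMU RCU work adds;
     * it is elided in this sketch. */
    (void)old;
}

int main(void)
{
    remap(0x1000, 0x1000);            /* writer installs the initial map */
    uint64_t off = lookup(0x1800);    /* reader, lock-free fast path     */
    remap(0x2000, 0x2000);            /* writer publishes a new map      */
    return off == 0x800 ? 0 : 1;
}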

It's still not trivial to make it 100% correct, but at the same time
it's not too hard to prepare something decent to play with.  Also, most
of the work can be done with KVM so it's more or less independent from
what you guys have been doing so far.

Paolo



