qemu-devel
From: Mark Burton
Subject: Re: [Qemu-devel] [RFC PATCH] target-arm: protect cpu_exclusive_*.
Date: Wed, 17 Dec 2014 12:12:27 +0100

> On 17 Dec 2014, at 11:45, Alexander Graf <address@hidden> wrote:
> 
> 
> 
> On 17.12.14 11:31, Mark Burton wrote:
>> 
>>> On 17 Dec 2014, at 11:28, Alexander Graf <address@hidden> wrote:
>>> 
>>> 
>>> 
>>> On 17.12.14 11:27, Frederic Konrad wrote:
>>>> On 16/12/2014 17:37, Peter Maydell wrote:
>>>>> On 16 December 2014 at 09:13,  <address@hidden> wrote:
>>>>>> From: KONRAD Frederic <address@hidden>
>>>>>> 
>>>>>> This adds a lock to avoid multiple exclusive accesses at the same
>>>>>> time in case of TCG multithreading.
>>>> Hi Peter,
>>>> 
>>>>> This feels to me like it's not really possible to review on
>>>>> its own, since you can't see how it fits into the design of
>>>>> the rest of the multithreading support.
>>>> True; the only thing we observe is that it doesn't change anything right now.
>>>> 
>>>>> The other approach here rather than having a pile of mutexes
>>>>> in the target-* code would be to have TCG IR support for
>>>>> "begin critical section"/"end critical section". Then you
>>>>> could have the main loop ensure that no other CPU is running
>>>>> at the same time as the critical-section code. (linux-user
>>>>> already has an ad-hoc implementation of this for the
>>>>> exclusives.)
>>>>> 
>>>>> -- PMM
>>>>> 
>>>> What do you mean by TCG IR?
>>> 
>>> TCG ops. The nice thing is that TCG could translate those into
>>> transactions if the host supports them as well.
>>> 
>> 
>> How's that different in reality from what we have now?
>> Cheers
>> Mark.
> 
> The current code can't optimize things in TCG. There's a good chance
> your TCG host implementation can have an optimization pass that creates
> host cmpxchg instructions or maybe even transaction blocks out of the
> critical sections.
> 
> 


Ok - I get it - I see the value, so long as it's possible to do. It would
solve a lot of problems...

We were not (yet) trying to fix that; we were simply asking the question: if we
add these mutexes, do we have any detrimental impact on anything?
Seems like the answer is that adding the mutexes is fine - it doesn't seem to
have a performance impact or anything. Good.

But - I see what you mean - if we implemented this as an op, then it would be
much simpler to optimise/fix properly afterwards - and that "fix" might not
even need to deal with the whole memory chain issue - maybe...

Cheers

Mark.
