Re: [Qemu-devel] [RFC 01/10] target-arm: protect cpu_exclusive_*.


From: Mark Burton
Subject: Re: [Qemu-devel] [RFC 01/10] target-arm: protect cpu_exclusive_*.
Date: Fri, 27 Feb 2015 08:54:53 +0100

> On 26 Feb 2015, at 23:56, Peter Maydell <address@hidden> wrote:
> 
> On 27 February 2015 at 03:09, Frederic Konrad <address@hidden> wrote:
>> On 29/01/2015 16:17, Peter Maydell wrote:
>>> 
>>> On 16 January 2015 at 17:19,  <address@hidden> wrote:
>>>> 
>>>> From: KONRAD Frederic <address@hidden>
>>>> 
>>>> This adds a lock to avoid multiple exclusive access at the same time in
>>>> case of
>>>> TCG multithread.
> 
>>> All the same comments I had on this patch earlier still apply:
>>> 
>>>  * I think adding mutex handling code to all the target-*
>>>    frontends rather than providing facilities in common
>>>    code for them to use is the wrong approach
>>>  * You will fail to unlock the mutex if the ldrex or strex
>>>    takes a data abort
>>>  * This is making no attempt to learn from or unify with
>>>    the existing attempts at handling exclusives in linux-user.
>>>    When we've done this work we should have a single
>>>    mechanism for handling exclusives in a multithreaded
>>>    host environment which is used by both softmmu and useronly
>>>    configs
> 
>> We decided to implement the whole atomic instruction inside a helper
> 
> ...which is a different approach which still isn't really
> addressing any of my remarks in the list above…

We agree on the above point. For the atomic instructions, I think we discussed at 
length what to do; however, we chose to 'ignore' the problem for now and to 
'hack' something together just to get it working initially. At this stage we want 
something that mostly works, so that we can then take the individual bits of the 
code and address them in more careful detail. Our problem with atomicity is that 
the 'hack' we put in place still seems to allow a race condition, and we can't 
see why :-(
But overall, the plan is absolutely to provide a better implementation.
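
For concreteness, the 'hack' is roughly the shape below. This is a simplified 
sketch, not the actual patch: the lock name and the helper are illustrative 
only, and the real code is generated rather than a plain C function.

/* Simplified sketch: serialise all exclusive accesses behind one global
 * mutex.  Names (tcg_excl_lock, do_ldrex_word) are illustrative only. */
#include "cpu.h"              /* CPUARMState, env->exclusive_* */
#include "qemu/thread.h"      /* QemuMutex, qemu_mutex_lock/unlock */
#include "exec/cpu_ldst.h"    /* cpu_ldl_data(): loads via the TLB */

static QemuMutex tcg_excl_lock;   /* assumed to be qemu_mutex_init()ed once */

static uint32_t do_ldrex_word(CPUARMState *env, uint32_t addr)
{
    uint32_t val;

    qemu_mutex_lock(&tcg_excl_lock);
    val = cpu_ldl_data(env, addr);   /* NB: longjmps out on a fault, leaving
                                      * the mutex held - exactly the data
                                      * abort problem raised above */
    env->exclusive_addr = addr;
    env->exclusive_val = val;
    qemu_mutex_unlock(&tcg_excl_lock);
    return val;
}

A single global lock is obviously the crudest possible serialisation; it is 
only there to get things running.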

> 
>> but is
>> that
>> possible to get the data with eg: cpu_physical_memory_rw instead of the
>> normal
>> generated code?
> 
> cpu_physical_memory_rw would bypass the TLB and so be much slower.
> Make sure you use the functions which go via the TLB if you do
> this in a helper (and remember that they will longjmp out on a
> tlb miss!)

At this point speed isn't our main concern; it's simplicity of implementation. 
We want it to work first, and then we can worry about a better implementation 
(which, as discussed above, probably shouldn't go down this path at all).
Given that, isn't it reasonable to go through cpu_physical_memory_rw, and hence 
not have to worry about the longjmp? Or am I missing something?
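
Just so we're comparing the same thing, the two options as I understand them 
look roughly like this (hypothetical fragments; 'paddr' is assumed to be an 
already-translated physical address and 'vaddr' the guest virtual address):

#include "exec/cpu-common.h"   /* cpu_physical_memory_rw() */
#include "exec/cpu_ldst.h"     /* cpu_ldl_data() */

/* (a) Bypass the TLB: simple and it never longjmps, but much slower,
 *     and it wants a physical address. */
static uint32_t read_word_slow(hwaddr paddr)
{
    uint32_t val;
    cpu_physical_memory_rw(paddr, (uint8_t *)&val, sizeof(val), 0 /* read */);
    return val;
}

/* (b) Go via the TLB: the fast path, but on a TLB miss it longjmps back
 *     to the cpu loop, so any lock held here would never be released. */
static uint32_t read_word_fast(CPUARMState *env, target_ulong vaddr)
{
    return cpu_ldl_data(env, vaddr);
}

If (a) really is acceptable for a first cut, the longjmp problem goes away, 
which is the attraction.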

> 
>> One other thing which looks suspicious it seems there is one pair of
>> exclusive_addr/exclusive_val per CPU is that normal?
> 
> Pretty sure we've already discussed how the current ldrex/strex
> implementation is not architecturally correct. I think this is
> another of those areas.

We have indeed discussed this, but this is a surprise. What we've found is that 
the 'globals' (which, when we discussed it, we assumed were indeed global) are 
not global at all: each CPU has its own exclusive_addr/exclusive_val pair.
If two CPUs both wanted to strex to the same address, and both had written the 
address and current value into their own copies of these variables, I believe it 
would currently be theoretically possible for both CPUs to complete the strex 
successfully. If the TB exited on the second branch, I suspect you could get a 
race condition leading to a failure? Though I guess this is unlikely.
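
To spell the scenario out, the store-exclusive check is effectively the 
following, done against per-CPU state (a C sketch of what the generated code 
does; the real code writes 0/1 into Rd and handles the other sizes and pairs):

/* C sketch of the strex check.  Because exclusive_addr/exclusive_val live
 * in each CPU's own CPUARMState, two CPUs can both pass this test for the
 * same location and both report success for their store-exclusive. */
static bool strex_word(CPUARMState *env, uint32_t addr, uint32_t newval)
{
    if (env->exclusive_addr == addr &&
        cpu_ldl_data(env, addr) == env->exclusive_val) {
        cpu_stl_data(env, addr, newval);   /* the store goes through */
        return true;                       /* strex reports success  */
    }
    return false;                          /* strex reports failure  */
}

Architecturally only one of the two stores should be able to succeed, which is 
why this looks wrong to us.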


> 
> In general I'd be much happier seeing a proper sketch of your
> design, what data structures etc you intend to share between
> CPUs and which are per-CPU, what generic mechanisms you plan
> to provide to allow targets to implement atomic instructions, etc.
> It's quite hard to see the whole picture at the moment.


Agreed - at this point, as I say, we are trying to get something 'just' stable 
enough that we can then use it to move forward from.
But I totally agree, we should be clearer about the overall picture (once we 
can see the wood for the trees).

Cheers

Mark.


> 
> -- PMM


+44 (0)20 7100 3485 x 210
+33 (0)5 33 52 01 77 x 210
+33 (0)603762104
mark.burton



