
Re: [Qemu-devel] [RFC 01/10] target-arm: protect cpu_exclusive_*.


From: Alexander Graf
Subject: Re: [Qemu-devel] [RFC 01/10] target-arm: protect cpu_exclusive_*.
Date: Thu, 26 Feb 2015 21:36:40 +0100



> On 26.02.2015 at 19:09, Frederic Konrad <address@hidden> wrote:
> 
>> On 29/01/2015 16:17, Peter Maydell wrote:
>>> On 16 January 2015 at 17:19,  <address@hidden> wrote:
>>> From: KONRAD Frederic <address@hidden>
>>> 
>>> This adds a lock to avoid multiple exclusive accesses at the same time
>>> in the case of multithreaded TCG.
>>> 
>>> Signed-off-by: KONRAD Frederic <address@hidden>
>> All the same comments I had on this patch earlier still apply:
>> 
>>  * I think adding mutex handling code to all the target-*
>>    frontends rather than providing facilities in common
>>    code for them to use is the wrong approach
>>  * You will fail to unlock the mutex if the ldrex or strex
>>    takes a data abort
>>  * This is making no attempt to learn from or unify with
>>    the existing attempts at handling exclusives in linux-user.
>>    When we've done this work we should have a single
>>    mechanism for handling exclusives in a multithreaded
>>    host environment which is used by both softmmu and useronly
>>    configs.
>> 
>> thanks
>> -- PMM
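
To make the second bullet concrete: if the helper takes the lock and then
performs the guarded access, a data abort longjmps straight back to the CPU
loop and the unlock is never executed. A minimal sketch of that failure mode
(the helper and mutex names here are illustrative, not taken from the patch):

void HELPER(ldrex_sketch)(CPUARMState *env, uint32_t addr)
{
    qemu_mutex_lock(&exclusive_mutex);             /* taken here... */
    /* cpu_ldl_data() can fault; the softmmu slow path then longjmps
     * back to the main loop to deliver the data abort, so control
     * never reaches the unlock below and the mutex stays held. */
    env->exclusive_val = cpu_ldl_data(env, addr);
    env->exclusive_addr = addr;
    qemu_mutex_unlock(&exclusive_mutex);           /* ...never runs */
}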
> 
> Hi,
> 
> We see some performance improvement with this on SMP, but we still hit random
> deadlocks in the guest in this function:
> 
> static inline void arch_spin_lock(arch_spinlock_t *lock)
> {
>    unsigned long tmp;
>    u32 newval;
>    arch_spinlock_t lockval;
> 
>    prefetchw(&lock->slock);
>    __asm__ __volatile__(
> "1:    ldrex    %0, [%3]\n"
> "    add    %1, %0, %4\n"
> "    strex    %2, %1, [%3]\n"
> "    teq    %2, #0\n"
> "    bne    1b"
>    : "=&r" (lockval), "=&r" (newval), "=&r" (tmp)
>    : "r" (&lock->slock), "I" (1 << TICKET_SHIFT)
>    : "cc");
> 
>    while (lockval.tickets.next != lockval.tickets.owner) {
>        wfe();
>        lockval.tickets.owner = ACCESS_ONCE(lock->tickets.owner);
>    }
> 
>    smp_mb();
> }
> 
> It hangs in the last while loop, which is why we immediately thought the
> problem was caused by our implementation of the atomic instructions.
> 
> We failed to unlock the mutex many times during boot, not because of a data
> abort but, I think, because we can be interrupted during a strex (e.g. there
> are two branches)?
> 
> We decided to implement the whole atomic instruction inside a helper, but is
> it possible to get the data with e.g. cpu_physical_memory_rw instead of the
> normal generated code?

Yes, but make sure you store pc in env before you enter the helper.

Alex
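
A minimal sketch of the pattern Alex describes, under stated assumptions:
gen_set_pc_im() is the real function from target-arm/translate.c, but the
helper name, the mutex, and the address-translation step are hypothetical.
Note that cpu_physical_memory_rw() takes a *physical* address, so the guest
virtual address has to be translated first, and any fault taken on that path
must release the mutex before the exception is raised.

/* Translate time: expose the guest PC before calling the helper so a
 * fault taken inside it can be reported precisely. */
gen_set_pc_im(s, s->pc - 4);
gen_helper_exclusive_ldrex(tmp, cpu_env, addr);

/* The helper itself, serialised by a (hypothetical) global mutex: */
uint32_t HELPER(exclusive_ldrex)(CPUARMState *env, uint32_t vaddr)
{
    uint32_t val;
    hwaddr paddr;

    qemu_mutex_lock(&exclusive_mutex);
    paddr = get_phys_addr_sketch(env, vaddr);   /* hypothetical lookup */
    cpu_physical_memory_rw(paddr, (uint8_t *)&val, sizeof(val), 0);
    env->exclusive_addr = vaddr;
    env->exclusive_val = val;
    qemu_mutex_unlock(&exclusive_mutex);
    return val;
}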

> 
> One other thing that looks suspicious: it seems there is one pair of
> exclusive_addr/exclusive_val per CPU; is that normal?
> 
> Thanks,
> Fred
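
On that last question: per-CPU state is indeed how QEMU models it. The ARM
architecture gives each PE its own local exclusive monitor, and target-arm
keeps the corresponding state in CPUARMState (target-arm/cpu.h):

/* load/store exclusive state */
uint64_t exclusive_addr;
uint64_t exclusive_val;
uint64_t exclusive_high;

What the per-CPU fields alone do not capture is a global monitor being
cleared when another CPU writes the guarded location; the value comparison
done at strex time only partly stands in for that, which is exactly what the
locking discussed above has to compensate for.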


