From: Alex Bennée
Subject: Re: [Qemu-devel] [RFC PATCH 3/4] include/qemu/atomic.h: default to __atomic functions
Date: Mon, 25 Jan 2016 18:23:26 +0000
User-agent: mu4e 0.9.17; emacs 25.0.50.6

Paolo Bonzini <address@hidden> writes:

> On 25/01/2016 17:49, Alex Bennée wrote:
>> The __atomic primitives have been available since GCC 4.7 and provide
>> a richer interface for describing memory ordering requirements. As a
>> bonus, using the primitives instead of hand-rolled functions lets us
>> use tools such as AddressSanitizer, which need well-defined APIs for
>> their analysis.
>>
>> If the __ATOMIC defines are available, we use the __atomic primitives
>> exclusively for all our atomic accesses. Otherwise we fall back to the
>> existing mixture of __sync primitives and hand-rolled barriers.
>>
>> Signed-off-by: Alex Bennée <address@hidden>
>> ---
>>  include/qemu/atomic.h | 126 ++++++++++++++++++++++++++++++++------------------
>>  1 file changed, 82 insertions(+), 44 deletions(-)
>>
>> diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
>> index bd2c075..414c81a 100644
>> --- a/include/qemu/atomic.h
>> +++ b/include/qemu/atomic.h
>> @@ -15,12 +15,90 @@
>>
>>  #include "qemu/compiler.h"
>>
>> -/* For C11 atomic ops */
>>
>>  /* Compiler barrier */
>>  #define barrier()   ({ asm volatile("" ::: "memory"); (void)0; })
>>
>> -#ifndef __ATOMIC_RELAXED
>> +#ifdef __ATOMIC_RELAXED
>> +/* For C11 atomic ops */
>> +
>> +/* Manual memory barriers
>> + *
>> + * __atomic_thread_fence does not include a compiler barrier; instead,
>> + * the barrier is part of __atomic_load/__atomic_store's "volatile-like"
>> + * semantics. If smp_wmb() is a no-op, absence of the barrier means that
>> + * the compiler is free to reorder stores on each side of the barrier.
>> + * Add one here, and similarly in smp_rmb() and smp_read_barrier_depends().
>> + */
>> +
>> +#define smp_mb()    ({ barrier(); __atomic_thread_fence(__ATOMIC_CONSUME); barrier(); })
>
> This should be seq_cst.

Heh, when I did this originally I had it as that (it was an overly
complex split into hand-rolled, __sync and __atomic implementations).
I'll fix it. I think I can remove the barrier()s as well, can't I?
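
i.e. something like this (just a sketch; whether the explicit
barrier()s are still needed around a seq_cst fence is the open
question):

  #define smp_mb()    ({ barrier(); __atomic_thread_fence(__ATOMIC_SEQ_CST); barrier(); })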

>
>> +#define smp_wmb()   ({ barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); barrier(); })
>> +#define smp_rmb()   ({ barrier(); __atomic_thread_fence(__ATOMIC_ACQUIRE); barrier(); })
>> +
>> +#define smp_read_barrier_depends() ({ barrier(); __atomic_thread_fence(__ATOMIC_CONSUME); barrier(); })
>> +
>> +/* Weak atomic operations prevent the compiler moving other
>> + * loads/stores past the atomic operation load/store.
>> + */
>> +#define atomic_read(ptr)                          \
>> +    ({                                            \
>> +    typeof(*ptr) _val;                            \
>> +     __atomic_load(ptr, &_val, __ATOMIC_CONSUME); \
>
> This should be relaxed.
>
>> +    _val;                                         \
>> +    })
>> +
>> +#define atomic_rcu_read(ptr)    atomic_read(ptr)
>
> This should be consume.
>
>> +
>> +#define atomic_set(ptr, i)  do {                        \
>> +    typeof(*ptr) _val = (i);                            \
>> +    __atomic_store(ptr, &_val, __ATOMIC_RELEASE);       \
>
> This should be relaxed.
>
>> +} while(0)
>> +
>> +#define atomic_rcu_set(ptr, i)  atomic_set(ptr, i)
>
> This should be release.
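
Taking the four ordering comments together, the weak and RCU accessors
would end up something like this (a sketch, assuming no other changes):

  #define atomic_read(ptr)                           \
      ({                                             \
      typeof(*ptr) _val;                             \
      __atomic_load(ptr, &_val, __ATOMIC_RELAXED);   \
      _val;                                          \
      })

  #define atomic_rcu_read(ptr)                       \
      ({                                             \
      typeof(*ptr) _val;                             \
      __atomic_load(ptr, &_val, __ATOMIC_CONSUME);   \
      _val;                                          \
      })

  #define atomic_set(ptr, i)  do {                   \
      typeof(*ptr) _val = (i);                       \
      __atomic_store(ptr, &_val, __ATOMIC_RELAXED);  \
  } while(0)

  #define atomic_rcu_set(ptr, i)  do {               \
      typeof(*ptr) _val = (i);                       \
      __atomic_store(ptr, &_val, __ATOMIC_RELEASE);  \
  } while(0)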
>
>> +/* Sequentially consistent atomic access */
>> +
>> +#define atomic_xchg(ptr, i)    ({                           \
>> +    typeof(*ptr) _new = (i), _old;                          \
>> +    __atomic_exchange(ptr, &_new, &_old, __ATOMIC_SEQ_CST); \
>> +    _old;                                                   \
>> +})
>> +
>> +/* Returns the eventual value, failed or not */
>> +#define atomic_cmpxchg(ptr, old, new)                                   \
>> +    ({                                                                  \
>> +    typeof(*ptr) _old = (old), _new = (new);                            \
>> +    __atomic_compare_exchange(ptr, &_old, &_new, false,                 \
>> +                              __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);      \
>> +    *ptr; /* can this race if cmpxchg not used elsewhere? */            \
>
> If I read the manual correctly, you can return _old here (that's why it
> is a pointer).
>
>> +    })
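
Right, so the macro can simply return _old: __atomic_compare_exchange
updates it with the current value when the exchange fails, so it always
holds the value the cmpxchg observed. A sketch:

  #define atomic_cmpxchg(ptr, old, new)                               \
      ({                                                              \
      typeof(*ptr) _old = (old), _new = (new);                        \
      __atomic_compare_exchange(ptr, &_old, &_new, false,             \
                                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);  \
      _old; /* the value seen by the cmpxchg, success or failure */   \
      })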
>> +
>> +#define atomic_mb_set(ptr, i)   ((void)atomic_xchg(ptr, i))
>> +#define atomic_mb_read(ptr)                             \
>> +    ({                                                  \
>> +    typeof(*ptr) _val;                                  \
>> +     __atomic_load(ptr, &_val, __ATOMIC_ACQUIRE);       \
>> +    _val;                                               \
>> +    })
>
> Please leave these defined in terms of relaxed accesses and memory
> barriers.

May I ask why? Doesn't the __ATOMIC_ACQUIRE ensure the load barrier is
in place? Or should it be a full SEQ_CST barrier?
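
For comparison, the hand-rolled fallback path expresses these with
relaxed accesses plus explicit barriers, along these lines (a sketch):

  #define atomic_mb_read(ptr)    ({           \
      typeof(*ptr) _val = atomic_read(ptr);   \
      smp_rmb();                              \
      _val;                                   \
      })

  #define atomic_mb_set(ptr, i)  do {         \
      atomic_set(ptr, i);                     \
      smp_mb();                               \
  } while (0)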

>
> Paolo


--
Alex Bennée


