qemu-ppc

Re: [Qemu-ppc] [PATCH RFC 3/4] target-ppc: use atomic_cmpxchg for ld/st


From: David Gibson
Subject: Re: [Qemu-ppc] [PATCH RFC 3/4] target-ppc: use atomic_cmpxchg for ld/st reservation
Date: Wed, 7 Sep 2016 14:02:52 +1000
User-agent: Mutt/1.7.0 (2016-08-17)

On Fri, Sep 02, 2016 at 12:02:55PM +0530, Nikunj A Dadhania wrote:
> Signed-off-by: Nikunj A Dadhania <address@hidden>

This really needs a comment indicating that this implementation isn't
strictly correct (although it is probably good enough in practice).
Specifically, a racing store which happens to write the same value
that was already in memory should clobber the reservation, but won't
with this implementation.

I had a long discussion at KVM Forum with Emilio Cota about this, in
which I discovered just how hard it is to strictly implement
store-conditional semantics in terms of anything else.  So, this is
probably a reasonable substitute, but we should note the fact that
it's not 100% correct.

> ---
>  target-ppc/translate.c | 24 +++++++++++++++++++++---
>  1 file changed, 21 insertions(+), 3 deletions(-)
> 
> diff --git a/target-ppc/translate.c b/target-ppc/translate.c
> index 4a882b3..447c13e 100644
> --- a/target-ppc/translate.c
> +++ b/target-ppc/translate.c
> @@ -72,6 +72,7 @@ static TCGv cpu_cfar;
>  #endif
>  static TCGv cpu_xer, cpu_so, cpu_ov, cpu_ca;
>  static TCGv cpu_reserve;
> +static TCGv cpu_reserve_val;
>  static TCGv cpu_fpscr;
>  static TCGv_i32 cpu_access_type;
>  
> @@ -176,6 +177,9 @@ void ppc_translate_init(void)
>      cpu_reserve = tcg_global_mem_new(cpu_env,
>                                       offsetof(CPUPPCState, reserve_addr),
>                                       "reserve_addr");
> +    cpu_reserve_val = tcg_global_mem_new(cpu_env,
> +                                     offsetof(CPUPPCState, reserve_val),
> +                                     "reserve_val");
>  
>      cpu_fpscr = tcg_global_mem_new(cpu_env,
>                                     offsetof(CPUPPCState, fpscr), "fpscr");
> @@ -3086,7 +3090,7 @@ static void gen_##name(DisasContext *ctx)           \
>      }                                                                \
>      tcg_gen_qemu_ld_tl(gpr, t0, ctx->mem_idx, memop);                \
>      tcg_gen_mov_tl(cpu_reserve, t0);                                 \
> -    tcg_gen_st_tl(gpr, cpu_env, offsetof(CPUPPCState, reserve_val)); \
> +    tcg_gen_mov_tl(cpu_reserve_val, gpr);                            \
>      tcg_temp_free(t0);                                               \
>  }
>  
> @@ -3112,14 +3116,28 @@ static void gen_conditional_store(DisasContext *ctx, TCGv EA,
>                                    int reg, int memop)
>  {
>      TCGLabel *l1;
> +    TCGv_i32 tmp = tcg_temp_local_new_i32();
> +    TCGv t0;
>  
> +    tcg_gen_movi_i32(tmp, 0);
>      tcg_gen_trunc_tl_i32(cpu_crf[0], cpu_so);
>      l1 = gen_new_label();
>      tcg_gen_brcond_tl(TCG_COND_NE, EA, cpu_reserve, l1);
> -    tcg_gen_ori_i32(cpu_crf[0], cpu_crf[0], 1 << CRF_EQ);
> -    tcg_gen_qemu_st_tl(cpu_gpr[reg], EA, ctx->mem_idx, memop);
> +
> +    t0 = tcg_temp_new();
> +    tcg_gen_atomic_cmpxchg_tl(t0, EA, cpu_reserve_val, cpu_gpr[reg],
> +                              ctx->mem_idx, DEF_MEMOP(memop));
> +    tcg_gen_setcond_tl(TCG_COND_EQ, t0, t0, cpu_reserve_val);
> +    tcg_gen_trunc_tl_i32(tmp, t0);
> +
>      gen_set_label(l1);
> +    tcg_gen_shli_i32(tmp, tmp, CRF_EQ);
> +    tcg_gen_or_i32(cpu_crf[0], cpu_crf[0], tmp);
>      tcg_gen_movi_tl(cpu_reserve, -1);
> +    tcg_gen_movi_tl(cpu_reserve_val, 0);
> +
> +    tcg_temp_free(t0);
> +    tcg_temp_free_i32(tmp);
>  }
>  #endif
>  

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
