Re: [Qemu-devel] Fwd: [RFC] Adding multithreads to LDREX/STREX.


From: Paolo Bonzini
Subject: Re: [Qemu-devel] Fwd: [RFC] Adding multithreads to LDREX/STREX.
Date: Tue, 03 Mar 2015 18:09:12 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.5.0


On 03/03/2015 17:47, Mark Burton wrote:
> +inline void arm_exclusive_lock(void)
> +{
> +    if (!cpu_have_exclusive_lock) {
> +        qemu_mutex_lock(&cpu_exclusive_lock);
> +        cpu_have_exclusive_lock = true;
> +    }
> +}
> +
> +inline void arm_exclusive_unlock(void)
> +{
> +    if (cpu_have_exclusive_lock) {
> +        cpu_have_exclusive_lock = false;
> +        qemu_mutex_unlock(&cpu_exclusive_lock);
> +    }
> +}
> +

This is almost but not quite a recursive mutex.  What about changing it
like this:

- arm_exclusive_lock just takes the lock and sets the flag; no "if"

- arm_exclusive_unlock does the opposite, again no "if"

- raise_exception checks the flag and skips "arm_exclusive_lock()" if
already set (see the sketch below)
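
A minimal sketch of that shape, reusing the names from your patch (the
raise_exception signature and body below are only placeholders for the
existing code):

    /* No "if" in either helper: the flag merely records that this
       thread currently holds the lock.  */
    inline void arm_exclusive_lock(void)
    {
        qemu_mutex_lock(&cpu_exclusive_lock);
        cpu_have_exclusive_lock = true;
    }

    inline void arm_exclusive_unlock(void)
    {
        cpu_have_exclusive_lock = false;
        qemu_mutex_unlock(&cpu_exclusive_lock);
    }

    /* raise_exception() takes the lock only if this thread does not
       hold it already:  */
    void raise_exception(CPUARMState *env, int excp)
    {
        if (!cpu_have_exclusive_lock) {
            arm_exclusive_lock();
        }
        /* ... existing exception-raising code ... */
    }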

The only other question I have is this:

> +    gen_helper_atomic_check(success, cpu_env, addr);
> +    tcg_gen_brcondi_i32(TCG_COND_EQ, success, 0, done_label);

Are you setting cpu_R[rd] correctly in this case?  That is, should you
be jumping to fail_label instead?  That could cause a failure to be
reported as a success.
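
Something along these lines, assuming the helper returns 0 when the
check fails:

    gen_helper_atomic_check(success, cpu_env, addr);
    /* On failure, take the path that sets cpu_R[rd] to 1:  */
    tcg_gen_brcondi_i32(TCG_COND_EQ, success, 0, fail_label);
    tcg_temp_free_i32(success);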

Paolo

> +    tcg_temp_free_i32(success);
> +
> +    /* Store should be OK, let's check the value */
> +    tmp = load_reg(s, rt);
> +    TCGv_i64 val64 = tcg_temp_new_i64();
>     switch (size) {
>     case 0:
>         gen_aa32_ld8u(tmp, addr, get_mem_index(s));
> @@ -7450,21 +7447,19 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
>         abort();
>     }
> 
> -    val64 = tcg_temp_new_i64();
>     if (size == 3) {
>         TCGv_i32 tmp2 = tcg_temp_new_i32();
>         TCGv_i32 tmp3 = tcg_temp_new_i32();
> +
>         tcg_gen_addi_i32(tmp2, addr, 4);
>         gen_aa32_ld32u(tmp3, tmp2, get_mem_index(s));
> -        tcg_temp_free_i32(tmp2);
>         tcg_gen_concat_i32_i64(val64, tmp, tmp3);
> -        tcg_temp_free_i32(tmp3);
> +        tcg_temp_free_i32(tmp2);
>     } else {
> -        tcg_gen_extu_i32_i64(val64, tmp);
> +      tcg_gen_extu_i32_i64(val64, tmp);
>     }
>     tcg_temp_free_i32(tmp);
> -
> -    tcg_gen_brcond_i64(TCG_COND_NE, val64, cpu_exclusive_val, fail_label);
> +    tcg_gen_brcond_i64(TCG_COND_NE, cpu_exclusive_val, val64, fail_label);
>     tcg_temp_free_i64(val64);
> 
>     tmp = load_reg(s, rt);
> @@ -7489,12 +7484,16 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
>         gen_aa32_st32(tmp, addr, get_mem_index(s));
>         tcg_temp_free_i32(tmp);
>     }
> +
> +    gen_helper_atomic_release(cpu_env);
>     tcg_gen_movi_i32(cpu_R[rd], 0);
>     tcg_gen_br(done_label);
> +
>     gen_set_label(fail_label);
> +    gen_helper_atomic_release(cpu_env);
>     tcg_gen_movi_i32(cpu_R[rd], 1);
> +
>     gen_set_label(done_label);
> -    tcg_gen_movi_i64(cpu_exclusive_addr, -1);


