[Qemu-devel] [RFC 02/10] softmmu_llsc_template.h: Move to multi-threading
From: Alvise Rigo
Subject: [Qemu-devel] [RFC 02/10] softmmu_llsc_template.h: Move to multi-threading
Date: Thu, 26 May 2016 18:35:41 +0200
Using tcg_exclusive_{lock,unlock}(), make the emulation of
LoadLink/StoreConditional thread safe.
During an LL access, this lock protects the load access itself, the
update of the exclusive history and the update of the VCPU's protected
range. In an SC access, the lock protects the store access itself, the
possible reset of other VCPUs' protected ranges and the reset of the
exclusive context of the calling VCPU.
The lock is also taken when a normal store accesses an exclusive page,
so that other VCPUs' protected ranges can be reset in case of
collision.
Moreover, adapt target-arm to also cope with the new multi-threaded
execution.
Signed-off-by: Alvise Rigo <address@hidden>
---
softmmu_llsc_template.h | 11 +++++++++--
softmmu_template.h | 6 ++++++
target-arm/op_helper.c | 6 ++++++
3 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/softmmu_llsc_template.h b/softmmu_llsc_template.h
index 2c4a494..d3810c0 100644
--- a/softmmu_llsc_template.h
+++ b/softmmu_llsc_template.h
@@ -62,11 +62,13 @@ WORD_TYPE helper_ldlink_name(CPUArchState *env, target_ulong addr,
hwaddr hw_addr;
unsigned mmu_idx = get_mmuidx(oi);
+ index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+
+ tcg_exclusive_lock();
+
/* Use the proper load helper from cpu_ldst.h */
ret = helper_ld(env, addr, oi, retaddr);
- index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-
/* hw_addr = hwaddr of the page (i.e. section->mr->ram_addr + xlat)
* plus the offset (i.e. addr & ~TARGET_PAGE_MASK) */
hw_addr = (env->iotlb[mmu_idx][index].addr & TARGET_PAGE_MASK) + addr;
@@ -95,6 +97,8 @@ WORD_TYPE helper_ldlink_name(CPUArchState *env, target_ulong addr,
cc->cpu_set_excl_protected_range(this_cpu, hw_addr, DATA_SIZE);
+ tcg_exclusive_unlock();
+
/* From now on we are in LL/SC context */
this_cpu->ll_sc_context = true;
@@ -114,6 +118,8 @@ WORD_TYPE helper_stcond_name(CPUArchState *env, target_ulong addr,
* access as one made by the store conditional wrapper. If the store
* conditional does not succeed, the value will be set to 0.*/
cpu->excl_succeeded = true;
+
+ tcg_exclusive_lock();
helper_st(env, addr, val, oi, retaddr);
if (cpu->excl_succeeded) {
@@ -123,6 +129,7 @@ WORD_TYPE helper_stcond_name(CPUArchState *env, target_ulong addr,
/* Unset LL/SC context */
cc->cpu_reset_excl_context(cpu);
+ tcg_exclusive_unlock();
return ret;
}
diff --git a/softmmu_template.h b/softmmu_template.h
index 76fe37e..9363a7b 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -537,11 +537,16 @@ static inline void smmu_helper(do_excl_store)(CPUArchState *env,
}
}
+ /* Take the lock in case we are not coming from a SC */
+ tcg_exclusive_lock();
+
smmu_helper(do_ram_store)(env, little_endian, val, addr, oi,
get_mmuidx(oi), index, retaddr);
reset_other_cpus_colliding_ll_addr(hw_addr, DATA_SIZE);
+ tcg_exclusive_unlock();
+
return;
}
@@ -572,6 +577,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
/* Handle an IO access or exclusive access. */
if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
if (tlb_addr & TLB_EXCL) {
+
smmu_helper(do_excl_store)(env, true, val, addr, oi, index,
retaddr);
return;
diff --git a/target-arm/op_helper.c b/target-arm/op_helper.c
index e22afc5..19ea52d 100644
--- a/target-arm/op_helper.c
+++ b/target-arm/op_helper.c
@@ -35,7 +35,9 @@ static void raise_exception(CPUARMState *env, uint32_t excp,
cs->exception_index = excp;
env->exception.syndrome = syndrome;
env->exception.target_el = target_el;
+ tcg_exclusive_lock();
cc->cpu_reset_excl_context(cs);
+ tcg_exclusive_unlock();
cpu_loop_exit(cs);
}
@@ -58,7 +60,9 @@ void HELPER(atomic_clear)(CPUARMState *env)
CPUState *cs = ENV_GET_CPU(env);
CPUClass *cc = CPU_GET_CLASS(cs);
+ tcg_exclusive_lock();
cc->cpu_reset_excl_context(cs);
+ tcg_exclusive_unlock();
}
uint32_t HELPER(neon_tbl)(CPUARMState *env, uint32_t ireg, uint32_t def,
@@ -874,7 +878,9 @@ void HELPER(exception_return)(CPUARMState *env)
aarch64_save_sp(env, cur_el);
+ tcg_exclusive_lock();
cc->cpu_reset_excl_context(cs);
+ tcg_exclusive_unlock();
/* We must squash the PSTATE.SS bit to zero unless both of the
* following hold:
--
2.8.3