[PATCH v9 38/46] target/arm: Complete TBI clearing for user-only for SVE
From: Richard Henderson
Subject: [PATCH v9 38/46] target/arm: Complete TBI clearing for user-only for SVE
Date: Thu, 25 Jun 2020 20:31:36 -0700
There are a number of paths through the SVE helpers by which the
TBI byte is still intact for user-only.

Because we currently always set TBI for user-only, we do not need
to pass the actual TBI setting down from above; instead we can
remove the top byte in the innermost primitives, so that no path
is forgotten. Moreover, this keeps the "dirty" pointer around at
the higher levels, where we need it for any MTE checking.

Since the normal case, especially for user-only, goes through RAM,
this clearing merely adds two insns per page lookup, which will be
completely in the noise.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v9: Added an assert for tbi in aarch64_tr_init_disas_context (pmm)
---
target/arm/cpu.c | 3 +++
target/arm/sve_helper.c | 19 +++++++++++++++++--
target/arm/translate-a64.c | 5 +++++
3 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index d9876337c0..afe81e9b6c 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -203,6 +203,9 @@ static void arm_cpu_reset(DeviceState *dev)
* Enable TBI0 and TBI1. While the real kernel only enables TBI0,
* turning on both here will produce smaller code and otherwise
* make no difference to the user-level emulation.
+ *
+ * In sve_probe_page, we assume that this is set.
+ * Do not modify this without other changes.
*/
env->cp15.tcr_el[1].raw_tcr = (3ULL << 37);
#else
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index ad974c2cc5..382fa82bc8 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3966,14 +3966,16 @@ static void sve_##NAME##_host(void *vd, intptr_t reg_off, void *host) \
static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
target_ulong addr, uintptr_t ra) \
{ \
- *(TYPEE *)(vd + H(reg_off)) = (TYPEM)TLB(env, addr, ra); \
+ *(TYPEE *)(vd + H(reg_off)) = \
+ (TYPEM)TLB(env, useronly_clean_ptr(addr), ra); \
}
#define DO_ST_TLB(NAME, H, TYPEE, TYPEM, TLB) \
static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
target_ulong addr, uintptr_t ra) \
{ \
- TLB(env, addr, (TYPEM)*(TYPEE *)(vd + H(reg_off)), ra); \
+ TLB(env, useronly_clean_ptr(addr), \
+ (TYPEM)*(TYPEE *)(vd + H(reg_off)), ra); \
}
#define DO_LD_PRIM_1(NAME, H, TE, TM) \
@@ -4091,6 +4093,19 @@ static bool sve_probe_page(SVEHostPage *info, bool nofault,
int flags;
addr += mem_off;
+
+ /*
+ * User-only currently always issues with TBI. See the comment
+ * above useronly_clean_ptr. Usually we clean this top byte away
+ * during translation, but we can't do that for e.g. vector + imm
+ * addressing modes.
+ *
+ * We currently always enable TBI for user-only, and do not provide
+ * a way to turn it off. So clean the pointer unconditionally here,
+ * rather than look it up here, or pass it down from above.
+ */
+ addr = useronly_clean_ptr(addr);
+
flags = probe_access_flags(env, addr, access_type, mmu_idx, nofault,
&info->host, retaddr);
info->flags = flags;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index e46c4a49e0..c20af6ee9d 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14634,6 +14634,11 @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
dc->features = env->features;
dc->dcz_blocksize = arm_cpu->dcz_blocksize;
+#ifdef CONFIG_USER_ONLY
+ /* In sve_probe_page, we assume TBI is enabled. */
+ tcg_debug_assert(dc->tbid & 1);
+#endif
+
/* Single step state. The code-generation logic here is:
* SS_ACTIVE == 0:
* generate code with no special handling for single-stepping (except
--
2.25.1