[PULL 31/44] target/arm: Implement MVE VPNOT
From: Peter Maydell
Subject: [PULL 31/44] target/arm: Implement MVE VPNOT
Date: Wed, 25 Aug 2021 11:35:21 +0100
Implement the MVE VPNOT insn, which inverts the bits in VPR.P0
(subject both to predication and to beat-wise execution).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
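(Editor's note, not part of the patch: the P0 update performed by the new
mve_vpnot helper below can be sketched as a small standalone C function.
The names old_p0, mask and eci_mask here are illustrative stand-ins for
env->v7m.vpr, mve_element_mask() and mve_eci_mask() as used in the helper.)

#include <stdint.h>

/*
 * Sketch of the VPNOT P0 update: beats not executed in this ECI slice
 * (eci_mask bit clear) keep their old P0 value, predicated lanes within
 * executed beats (mask bit clear) become 0, and every other P0 bit is
 * inverted.
 */
static uint16_t vpnot_p0(uint16_t old_p0, uint16_t mask, uint16_t eci_mask)
{
    uint16_t beatpred = ~old_p0 & mask;                  /* invert, then zero predicated lanes */
    return (old_p0 & ~eci_mask) | (beatpred & eci_mask); /* keep P0 of unexecuted beats */
}

With no predication and all beats executed (mask == eci_mask == 0xffff) this
reduces to a plain bitwise NOT of P0, the simple VPNOT case.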
target/arm/helper-mve.h | 1 +
target/arm/mve.decode | 1 +
target/arm/mve_helper.c | 17 +++++++++++++++++
target/arm/translate-mve.c | 19 +++++++++++++++++++
4 files changed, 38 insertions(+)
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index 651020aaad8..8cb941912fc 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -119,6 +119,7 @@ DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_1(mve_vpnot, TCG_CALL_NO_WG, void, env)
DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index 774ee2a1a5b..40bd0c04b59 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -571,6 +571,7 @@ VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp
{
+ VPNOT 1111 1110 0 0 11 000 1 000 0 1111 0100 1101
VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
VCMPEQ_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 0 0 .... @vcmp_scalar
}
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index d326205cbf0..c22a00c5ed6 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -2201,6 +2201,23 @@ void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm)
mve_advance_vpt(env);
}
+void HELPER(mve_vpnot)(CPUARMState *env)
+{
+    /*
+     * P0 bits for unexecuted beats (where eci_mask is 0) are unchanged.
+     * P0 bits for predicated lanes in executed beats (where mask is 0) are 0.
+     * P0 bits otherwise are inverted.
+     * (This is the same logic as VCMP.)
+     * This insn is itself subject to predication and to beat-wise execution,
+     * and after it executes VPT state advances in the usual way.
+     */
+    uint16_t mask = mve_element_mask(env);
+    uint16_t eci_mask = mve_eci_mask(env);
+    uint16_t beatpred = ~env->v7m.vpr & mask;
+    env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | (beatpred & eci_mask);
+    mve_advance_vpt(env);
+}
+
#define DO_1OP_SAT(OP, ESIZE, TYPE, FN) \
void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
{ \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index 93707fdd681..cc2e58cfe2f 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -887,6 +887,25 @@ static bool trans_VPST(DisasContext *s, arg_VPST *a)
return true;
}
+static bool trans_VPNOT(DisasContext *s, arg_VPNOT *a)
+{
+    /*
+     * Invert the predicate in VPR.P0. We have to call out to
+     * a helper because this insn itself is beatwise and can
+     * be predicated.
+     */
+    if (!dc_isar_feature(aa32_mve, s)) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    gen_helper_mve_vpnot(cpu_env);
+    mve_update_eci(s);
+    return true;
+}
+
static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
{
/* VADDV: vector add across vector */
--
2.20.1