From: Mark Cave-Ayland
Subject: [Qemu-ppc] [RFC PATCH v2 4/9] target/ppc: delay writeback of avr{l,h} during lvx instruction
Date: Mon, 17 Dec 2018 12:24:00 +0000
During review of the previous patch, Richard pointed out an existing bug: the
writeback to the avr{l,h} registers must be delayed until after any exceptions
have been raised.

Perform both 64-bit loads into separate temporaries and only then write them
into the avr{l,h} registers together, ensuring that this is always the case.
Signed-off-by: Mark Cave-Ayland <address@hidden>
---
target/ppc/translate/vmx-impl.inc.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 30046c6e31..cd7d12265c 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -18,33 +18,35 @@ static inline TCGv_ptr gen_avr_ptr(int reg)
 static void glue(gen_, name)(DisasContext *ctx)                         \
 {                                                                       \
     TCGv EA;                                                            \
-    TCGv_i64 avr;                                                       \
+    TCGv_i64 avr1, avr2;                                                \
     if (unlikely(!ctx->altivec_enabled)) {                              \
         gen_exception(ctx, POWERPC_EXCP_VPU);                           \
         return;                                                         \
     }                                                                   \
     gen_set_access_type(ctx, ACCESS_INT);                               \
-    avr = tcg_temp_new_i64();                                           \
+    avr1 = tcg_temp_new_i64();                                          \
+    avr2 = tcg_temp_new_i64();                                          \
     EA = tcg_temp_new();                                                \
     gen_addr_reg_index(ctx, EA);                                        \
     tcg_gen_andi_tl(EA, EA, ~0xf);                                      \
     /* We only need to swap high and low halves. gen_qemu_ld64_i64     \
        does necessary 64-bit byteswap already. */                       \
     if (ctx->le_mode) {                                                 \
-        gen_qemu_ld64_i64(ctx, avr, EA);                                \
-        set_avr64(rD(ctx->opcode), avr, false);                         \
+        gen_qemu_ld64_i64(ctx, avr1, EA);                               \
         tcg_gen_addi_tl(EA, EA, 8);                                     \
-        gen_qemu_ld64_i64(ctx, avr, EA);                                \
-        set_avr64(rD(ctx->opcode), avr, true);                          \
+        gen_qemu_ld64_i64(ctx, avr2, EA);                               \
+        set_avr64(rD(ctx->opcode), avr1, false);                        \
+        set_avr64(rD(ctx->opcode), avr2, true);                         \
     } else {                                                            \
-        gen_qemu_ld64_i64(ctx, avr, EA);                                \
-        set_avr64(rD(ctx->opcode), avr, true);                          \
+        gen_qemu_ld64_i64(ctx, avr1, EA);                               \
         tcg_gen_addi_tl(EA, EA, 8);                                     \
-        gen_qemu_ld64_i64(ctx, avr, EA);                                \
-        set_avr64(rD(ctx->opcode), avr, false);                         \
+        gen_qemu_ld64_i64(ctx, avr2, EA);                               \
+        set_avr64(rD(ctx->opcode), avr1, true);                         \
+        set_avr64(rD(ctx->opcode), avr2, false);                        \
     }                                                                   \
     tcg_temp_free(EA);                                                  \
-    tcg_temp_free_i64(avr);                                             \
+    tcg_temp_free_i64(avr1);                                            \
+    tcg_temp_free_i64(avr2);                                            \
 }

 #define GEN_VR_STX(name, opc2, opc3)                                    \
--
2.11.0