Re: [Qemu-devel] [RFC PATCH 21/34] arm: Rename all exceptions
From: Peter Crosthwaite
Subject: Re: [Qemu-devel] [RFC PATCH 21/34] arm: Rename all exceptions
Date: Thu, 14 May 2015 22:43:04 -0700
On Sun, May 10, 2015 at 11:29 PM, Peter Crosthwaite
<address@hidden> wrote:
> These are architecture specific, but via cpu.h are visible in the
> common and global namespaces. Prefix them with "ARMAR_" to avoid
> namespace collisions. This prepares for multi-arch support, where
> multiple cpu.h headers may be included by device-land code and such
> generic names would collide.
>
> Use the prefix ARM"AR" because this exception table is separate from
> the M-profile support; the "AR" qualifier makes it specific to the
> A and R profiles.
>
So I am not exactly sure what to do here going forward. This is going
to get messy with all the other arches. The alternatives are:
1: Split these arch-specific private defs into internals.h or a new
header. Whichever way we go, though, the header needs to be exported
to linux-user code (awkward).
2: Purge all device-land uses of cpu.h. Devices should be able to use
cpu-qom.h instead, and the random bits of machine-model code that
reach into the env or strobe interrupts need to be fixed.
3: This patch or something like it.
Regards,
Peter
> Signed-off-by: Peter Crosthwaite <address@hidden>
> ---
> linux-user/main.c | 28 +++++++++++------------
> target-arm/cpu.c | 20 ++++++++---------
> target-arm/cpu.h | 38 +++++++++++++++----------------
> target-arm/helper-a64.c | 24 ++++++++++----------
> target-arm/helper.c | 56 +++++++++++++++++++++++-----------------------
> target-arm/internals.h | 36 ++++++++++++++---------------
> target-arm/op_helper.c | 20 ++++++++---------
> target-arm/psci.c | 4 ++--
> target-arm/translate-a64.c | 18 +++++++--------
> target-arm/translate.c | 44 ++++++++++++++++++------------------
> 10 files changed, 144 insertions(+), 144 deletions(-)
>
> diff --git a/linux-user/main.c b/linux-user/main.c
> index 60b5a5f..50fbd7e 100644
> --- a/linux-user/main.c
> +++ b/linux-user/main.c
> @@ -681,7 +681,7 @@ void cpu_loop(CPUARMState *env)
> trapnr = cpu_arm_exec(env);
> cpu_exec_end(cs);
> switch(trapnr) {
> - case EXCP_UDEF:
> + case ARMAR_EXCP_UDEF:
> {
> TaskState *ts = cs->opaque;
> uint32_t opcode;
> @@ -752,12 +752,12 @@ void cpu_loop(CPUARMState *env)
> }
> }
> break;
> - case EXCP_SWI:
> - case EXCP_BKPT:
> + case ARMAR_EXCP_SWI:
> + case ARMAR_EXCP_BKPT:
> {
> env->eabi = 1;
> /* system call */
> - if (trapnr == EXCP_BKPT) {
> + if (trapnr == ARMAR_EXCP_BKPT) {
> if (env->thumb) {
> /* FIXME - what to do if get_user() fails? */
> get_user_code_u16(insn, env->regs[15],
> env->bswap_code);
> @@ -833,13 +833,13 @@ void cpu_loop(CPUARMState *env)
> case EXCP_INTERRUPT:
> /* just indicate that signals should be handled asap */
> break;
> - case EXCP_STREX:
> + case ARMAR_EXCP_STREX:
> if (!do_strex(env)) {
> break;
> }
> /* fall through for segv */
> - case EXCP_PREFETCH_ABORT:
> - case EXCP_DATA_ABORT:
> + case ARMAR_EXCP_PREFETCH_ABORT:
> + case ARMAR_EXCP_DATA_ABORT:
> addr = env->exception.vaddress;
> {
> info.si_signo = TARGET_SIGSEGV;
> @@ -865,7 +865,7 @@ void cpu_loop(CPUARMState *env)
> }
> }
> break;
> - case EXCP_KERNEL_TRAP:
> + case ARMAR_EXCP_KERNEL_TRAP:
> if (do_kernel_trap(env))
> goto error;
> break;
> @@ -1013,7 +1013,7 @@ void cpu_loop(CPUARMState *env)
> cpu_exec_end(cs);
>
> switch (trapnr) {
> - case EXCP_SWI:
> + case ARMAR_EXCP_SWI:
> env->xregs[0] = do_syscall(env,
> env->xregs[8],
> env->xregs[0],
> @@ -1027,20 +1027,20 @@ void cpu_loop(CPUARMState *env)
> case EXCP_INTERRUPT:
> /* just indicate that signals should be handled asap */
> break;
> - case EXCP_UDEF:
> + case ARMAR_EXCP_UDEF:
> info.si_signo = TARGET_SIGILL;
> info.si_errno = 0;
> info.si_code = TARGET_ILL_ILLOPN;
> info._sifields._sigfault._addr = env->pc;
> queue_signal(env, info.si_signo, &info);
> break;
> - case EXCP_STREX:
> + case ARMAR_EXCP_STREX:
> if (!do_strex_a64(env)) {
> break;
> }
> /* fall through for segv */
> - case EXCP_PREFETCH_ABORT:
> - case EXCP_DATA_ABORT:
> + case ARMAR_EXCP_PREFETCH_ABORT:
> + case ARMAR_EXCP_DATA_ABORT:
> info.si_signo = TARGET_SIGSEGV;
> info.si_errno = 0;
> /* XXX: check env->error_code */
> @@ -1049,7 +1049,7 @@ void cpu_loop(CPUARMState *env)
> queue_signal(env, info.si_signo, &info);
> break;
> case EXCP_DEBUG:
> - case EXCP_BKPT:
> + case ARMAR_EXCP_BKPT:
> sig = gdb_handlesig(cs, TARGET_SIGTRAP);
> if (sig) {
> info.si_signo = sig;
> diff --git a/target-arm/cpu.c b/target-arm/cpu.c
> index cfa761a..566deb9 100644
> --- a/target-arm/cpu.c
> +++ b/target-arm/cpu.c
> @@ -209,26 +209,26 @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
> bool ret = false;
>
> if (interrupt_request & CPU_INTERRUPT_FIQ
> - && arm_excp_unmasked(cs, EXCP_FIQ)) {
> - cs->exception_index = EXCP_FIQ;
> + && arm_excp_unmasked(cs, ARMAR_EXCP_FIQ)) {
> + cs->exception_index = ARMAR_EXCP_FIQ;
> cc->do_interrupt(cs);
> ret = true;
> }
> if (interrupt_request & CPU_INTERRUPT_HARD
> - && arm_excp_unmasked(cs, EXCP_IRQ)) {
> - cs->exception_index = EXCP_IRQ;
> + && arm_excp_unmasked(cs, ARMAR_EXCP_IRQ)) {
> + cs->exception_index = ARMAR_EXCP_IRQ;
> cc->do_interrupt(cs);
> ret = true;
> }
> if (interrupt_request & CPU_INTERRUPT_VIRQ
> - && arm_excp_unmasked(cs, EXCP_VIRQ)) {
> - cs->exception_index = EXCP_VIRQ;
> + && arm_excp_unmasked(cs, ARMAR_EXCP_VIRQ)) {
> + cs->exception_index = ARMAR_EXCP_VIRQ;
> cc->do_interrupt(cs);
> ret = true;
> }
> if (interrupt_request & CPU_INTERRUPT_VFIQ
> - && arm_excp_unmasked(cs, EXCP_VFIQ)) {
> - cs->exception_index = EXCP_VFIQ;
> + && arm_excp_unmasked(cs, ARMAR_EXCP_VFIQ)) {
> + cs->exception_index = ARMAR_EXCP_VFIQ;
> cc->do_interrupt(cs);
> ret = true;
> }
> @@ -247,7 +247,7 @@ static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
>
> if (interrupt_request & CPU_INTERRUPT_FIQ
> && !(env->daif & PSTATE_F)) {
> - cs->exception_index = EXCP_FIQ;
> + cs->exception_index = ARMAR_EXCP_FIQ;
> cc->do_interrupt(cs);
> ret = true;
> }
> @@ -264,7 +264,7 @@ static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
> if (interrupt_request & CPU_INTERRUPT_HARD
> && !(env->daif & PSTATE_I)
> && (env->regs[15] < 0xfffffff0)) {
> - cs->exception_index = EXCP_IRQ;
> + cs->exception_index = ARMAR_EXCP_IRQ;
> cc->do_interrupt(cs);
> ret = true;
> }
> diff --git a/target-arm/cpu.h b/target-arm/cpu.h
> index d4a5899..7d08301 100644
> --- a/target-arm/cpu.h
> +++ b/target-arm/cpu.h
> @@ -41,21 +41,21 @@
>
> #include "fpu/softfloat.h"
>
> -#define EXCP_UDEF 1 /* undefined instruction */
> -#define EXCP_SWI 2 /* software interrupt */
> -#define EXCP_PREFETCH_ABORT 3
> -#define EXCP_DATA_ABORT 4
> -#define EXCP_IRQ 5
> -#define EXCP_FIQ 6
> -#define EXCP_BKPT 7
> -#define EXCP_EXCEPTION_EXIT 8 /* Return from v7M exception. */
> -#define EXCP_KERNEL_TRAP 9 /* Jumped to kernel code page. */
> -#define EXCP_STREX 10
> -#define EXCP_HVC 11 /* HyperVisor Call */
> -#define EXCP_HYP_TRAP 12
> -#define EXCP_SMC 13 /* Secure Monitor Call */
> -#define EXCP_VIRQ 14
> -#define EXCP_VFIQ 15
> +#define ARMAR_EXCP_UDEF 1 /* undefined instruction */
> +#define ARMAR_EXCP_SWI 2 /* software interrupt */
> +#define ARMAR_EXCP_PREFETCH_ABORT 3
> +#define ARMAR_EXCP_DATA_ABORT 4
> +#define ARMAR_EXCP_IRQ 5
> +#define ARMAR_EXCP_FIQ 6
> +#define ARMAR_EXCP_BKPT 7
> +#define ARMAR_EXCP_EXCEPTION_EXIT 8 /* Return from v7M exception. */
> +#define ARMAR_EXCP_KERNEL_TRAP 9 /* Jumped to kernel code page. */
> +#define ARMAR_EXCP_STREX 10
> +#define ARMAR_EXCP_HVC 11 /* HyperVisor Call */
> +#define ARMAR_EXCP_HYP_TRAP 12
> +#define ARMAR_EXCP_SMC 13 /* Secure Monitor Call */
> +#define ARMAR_EXCP_VIRQ 14
> +#define ARMAR_EXCP_VFIQ 15
>
> #define ARMV7M_EXCP_RESET 1
> #define ARMV7M_EXCP_NMI 2
> @@ -1503,7 +1503,7 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
> }
>
> switch (excp_idx) {
> - case EXCP_FIQ:
> + case ARMAR_EXCP_FIQ:
> /* If FIQs are routed to EL3 or EL2 then there are cases where we
> * override the CPSR.F in determining if the exception is masked or
> * not. If neither of these are set then we fall back to the CPSR.F
> @@ -1521,7 +1521,7 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
> pstate_unmasked = !(env->daif & PSTATE_F);
> break;
>
> - case EXCP_IRQ:
> + case ARMAR_EXCP_IRQ:
> /* When EL3 execution state is 32-bit, if HCR.IMO is set then we may
> * override the CPSR.I masking when in non-secure state. The SCR.IRQ
> * setting has already been taken into consideration when setting the
> @@ -1532,13 +1532,13 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
> pstate_unmasked = !(env->daif & PSTATE_I);
> break;
>
> - case EXCP_VFIQ:
> + case ARMAR_EXCP_VFIQ:
> if (secure || !(env->cp15.hcr_el2 & HCR_FMO)) {
> /* VFIQs are only taken when hypervized and non-secure. */
> return false;
> }
> return !(env->daif & PSTATE_F);
> - case EXCP_VIRQ:
> + case ARMAR_EXCP_VIRQ:
> if (secure || !(env->cp15.hcr_el2 & HCR_IMO)) {
> /* VIRQs are only taken when hypervized and non-secure. */
> return false;
> diff --git a/target-arm/helper-a64.c b/target-arm/helper-a64.c
> index 861f6fa..d8869b3 100644
> --- a/target-arm/helper-a64.c
> +++ b/target-arm/helper-a64.c
> @@ -492,26 +492,26 @@ void aarch64_cpu_do_interrupt(CPUState *cs)
> }
>
> switch (cs->exception_index) {
> - case EXCP_PREFETCH_ABORT:
> - case EXCP_DATA_ABORT:
> + case ARMAR_EXCP_PREFETCH_ABORT:
> + case ARMAR_EXCP_DATA_ABORT:
> env->cp15.far_el[new_el] = env->exception.vaddress;
> qemu_log_mask(CPU_LOG_INT, "...with FAR 0x%" PRIx64 "\n",
> env->cp15.far_el[new_el]);
> /* fall through */
> - case EXCP_BKPT:
> - case EXCP_UDEF:
> - case EXCP_SWI:
> - case EXCP_HVC:
> - case EXCP_HYP_TRAP:
> - case EXCP_SMC:
> + case ARMAR_EXCP_BKPT:
> + case ARMAR_EXCP_UDEF:
> + case ARMAR_EXCP_SWI:
> + case ARMAR_EXCP_HVC:
> + case ARMAR_EXCP_HYP_TRAP:
> + case ARMAR_EXCP_SMC:
> env->cp15.esr_el[new_el] = env->exception.syndrome;
> break;
> - case EXCP_IRQ:
> - case EXCP_VIRQ:
> + case ARMAR_EXCP_IRQ:
> + case ARMAR_EXCP_VIRQ:
> addr += 0x80;
> break;
> - case EXCP_FIQ:
> - case EXCP_VFIQ:
> + case ARMAR_EXCP_FIQ:
> + case ARMAR_EXCP_VFIQ:
> addr += 0x100;
> break;
> default:
> diff --git a/target-arm/helper.c b/target-arm/helper.c
> index f8f8d76..b1ff438 100644
> --- a/target-arm/helper.c
> +++ b/target-arm/helper.c
> @@ -4053,9 +4053,9 @@ int arm_cpu_handle_mmu_fault(CPUState *cs, vaddr address, int rw,
>
> env->exception.vaddress = address;
> if (rw == 2) {
> - cs->exception_index = EXCP_PREFETCH_ABORT;
> + cs->exception_index = ARMAR_EXCP_PREFETCH_ABORT;
> } else {
> - cs->exception_index = EXCP_DATA_ABORT;
> + cs->exception_index = ARMAR_EXCP_DATA_ABORT;
> }
> return 1;
> }
> @@ -4235,11 +4235,11 @@ static inline uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
> int is64 = arm_el_is_aa64(env, 3);
>
> switch (excp_idx) {
> - case EXCP_IRQ:
> + case ARMAR_EXCP_IRQ:
> scr = ((env->cp15.scr_el3 & SCR_IRQ) == SCR_IRQ);
> hcr = ((env->cp15.hcr_el2 & HCR_IMO) == HCR_IMO);
> break;
> - case EXCP_FIQ:
> + case ARMAR_EXCP_FIQ:
> scr = ((env->cp15.scr_el3 & SCR_FIQ) == SCR_FIQ);
> hcr = ((env->cp15.hcr_el2 & HCR_FMO) == HCR_FMO);
> break;
> @@ -4272,19 +4272,19 @@ unsigned int arm_excp_target_el(CPUState *cs, unsigned int excp_idx)
> bool secure = arm_is_secure(env);
>
> switch (excp_idx) {
> - case EXCP_HVC:
> - case EXCP_HYP_TRAP:
> + case ARMAR_EXCP_HVC:
> + case ARMAR_EXCP_HYP_TRAP:
> target_el = 2;
> break;
> - case EXCP_SMC:
> + case ARMAR_EXCP_SMC:
> target_el = 3;
> break;
> - case EXCP_FIQ:
> - case EXCP_IRQ:
> + case ARMAR_EXCP_FIQ:
> + case ARMAR_EXCP_IRQ:
> target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
> break;
> - case EXCP_VIRQ:
> - case EXCP_VFIQ:
> + case ARMAR_EXCP_VIRQ:
> + case ARMAR_EXCP_VFIQ:
> target_el = 1;
> break;
> default:
> @@ -4386,21 +4386,21 @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
> /* TODO: Need to escalate if the current priority is higher than the
> one we're raising. */
> switch (cs->exception_index) {
> - case EXCP_UDEF:
> + case ARMAR_EXCP_UDEF:
> armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE);
> return;
> - case EXCP_SWI:
> + case ARMAR_EXCP_SWI:
> /* The PC already points to the next instruction. */
> armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC);
> return;
> - case EXCP_PREFETCH_ABORT:
> - case EXCP_DATA_ABORT:
> + case ARMAR_EXCP_PREFETCH_ABORT:
> + case ARMAR_EXCP_DATA_ABORT:
> /* TODO: if we implemented the MPU registers, this is where we
> * should set the MMFAR, etc from exception.fsr and exception.vaddress.
> */
> armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM);
> return;
> - case EXCP_BKPT:
> + case ARMAR_EXCP_BKPT:
> if (semihosting_enabled) {
> int nr;
> nr = arm_lduw_code(env, env->regs[15], env->bswap_code) & 0xff;
> @@ -4413,10 +4413,10 @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
> }
> armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG);
> return;
> - case EXCP_IRQ:
> + case ARMAR_EXCP_IRQ:
> env->v7m.exception = armv7m_nvic_acknowledge_irq(env->nvic);
> break;
> - case EXCP_EXCEPTION_EXIT:
> + case ARMAR_EXCP_EXCEPTION_EXIT:
> do_v7m_exception_exit(env);
> return;
> default:
> @@ -4703,7 +4703,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
>
> /* TODO: Vectored interrupt controller. */
> switch (cs->exception_index) {
> - case EXCP_UDEF:
> + case ARMAR_EXCP_UDEF:
> new_mode = ARM_CPU_MODE_UND;
> addr = 0x04;
> mask = CPSR_I;
> @@ -4712,7 +4712,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
> else
> offset = 4;
> break;
> - case EXCP_SWI:
> + case ARMAR_EXCP_SWI:
> if (semihosting_enabled) {
> /* Check for semihosting interrupt. */
> if (env->thumb) {
> @@ -4738,7 +4738,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
> /* The PC already points to the next instruction. */
> offset = 0;
> break;
> - case EXCP_BKPT:
> + case ARMAR_EXCP_BKPT:
> /* See if this is a semihosting syscall. */
> if (env->thumb && semihosting_enabled) {
> mask = arm_lduw_code(env, env->regs[15], env->bswap_code) & 0xff;
> @@ -4752,7 +4752,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
> }
> env->exception.fsr = 2;
> /* Fall through to prefetch abort. */
> - case EXCP_PREFETCH_ABORT:
> + case ARMAR_EXCP_PREFETCH_ABORT:
> A32_BANKED_CURRENT_REG_SET(env, ifsr, env->exception.fsr);
> A32_BANKED_CURRENT_REG_SET(env, ifar, env->exception.vaddress);
> qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x IFAR 0x%x\n",
> @@ -4762,7 +4762,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
> mask = CPSR_A | CPSR_I;
> offset = 4;
> break;
> - case EXCP_DATA_ABORT:
> + case ARMAR_EXCP_DATA_ABORT:
> A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr);
> A32_BANKED_CURRENT_REG_SET(env, dfar, env->exception.vaddress);
> qemu_log_mask(CPU_LOG_INT, "...with DFSR 0x%x DFAR 0x%x\n",
> @@ -4773,7 +4773,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
> mask = CPSR_A | CPSR_I;
> offset = 8;
> break;
> - case EXCP_IRQ:
> + case ARMAR_EXCP_IRQ:
> new_mode = ARM_CPU_MODE_IRQ;
> addr = 0x18;
> /* Disable IRQ and imprecise data aborts. */
> @@ -4785,7 +4785,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
> mask |= CPSR_F;
> }
> break;
> - case EXCP_FIQ:
> + case ARMAR_EXCP_FIQ:
> new_mode = ARM_CPU_MODE_FIQ;
> addr = 0x1c;
> /* Disable FIQ, IRQ and imprecise data aborts. */
> @@ -4796,7 +4796,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
> }
> offset = 4;
> break;
> - case EXCP_SMC:
> + case ARMAR_EXCP_SMC:
> new_mode = ARM_CPU_MODE_MON;
> addr = 0x08;
> mask = CPSR_A | CPSR_I | CPSR_F;
> @@ -5823,13 +5823,13 @@ int arm_cpu_handle_mmu_fault(CPUState *cs, vaddr address,
> */
> if (access_type == 2) {
> syn = syn_insn_abort(same_el, 0, 0, syn);
> - cs->exception_index = EXCP_PREFETCH_ABORT;
> + cs->exception_index = ARMAR_EXCP_PREFETCH_ABORT;
> } else {
> syn = syn_data_abort(same_el, 0, 0, 0, access_type == 1, syn);
> if (access_type == 1 && arm_feature(env, ARM_FEATURE_V6)) {
> ret |= (1 << 11);
> }
> - cs->exception_index = EXCP_DATA_ABORT;
> + cs->exception_index = ARMAR_EXCP_DATA_ABORT;
> }
>
> env->exception.syndrome = syn;
> diff --git a/target-arm/internals.h b/target-arm/internals.h
> index 2cc3017..8a6d4d4 100644
> --- a/target-arm/internals.h
> +++ b/target-arm/internals.h
> @@ -34,30 +34,30 @@ static inline bool excp_is_internal(int excp)
> || excp == EXCP_HLT
> || excp == EXCP_DEBUG
> || excp == EXCP_HALTED
> - || excp == EXCP_EXCEPTION_EXIT
> - || excp == EXCP_KERNEL_TRAP
> - || excp == EXCP_STREX;
> + || excp == ARMAR_EXCP_EXCEPTION_EXIT
> + || excp == ARMAR_EXCP_KERNEL_TRAP
> + || excp == ARMAR_EXCP_STREX;
> }
>
> /* Exception names for debug logging; note that not all of these
> * precisely correspond to architectural exceptions.
> */
> static const char * const excnames[] = {
> - [EXCP_UDEF] = "Undefined Instruction",
> - [EXCP_SWI] = "SVC",
> - [EXCP_PREFETCH_ABORT] = "Prefetch Abort",
> - [EXCP_DATA_ABORT] = "Data Abort",
> - [EXCP_IRQ] = "IRQ",
> - [EXCP_FIQ] = "FIQ",
> - [EXCP_BKPT] = "Breakpoint",
> - [EXCP_EXCEPTION_EXIT] = "QEMU v7M exception exit",
> - [EXCP_KERNEL_TRAP] = "QEMU intercept of kernel commpage",
> - [EXCP_STREX] = "QEMU intercept of STREX",
> - [EXCP_HVC] = "Hypervisor Call",
> - [EXCP_HYP_TRAP] = "Hypervisor Trap",
> - [EXCP_SMC] = "Secure Monitor Call",
> - [EXCP_VIRQ] = "Virtual IRQ",
> - [EXCP_VFIQ] = "Virtual FIQ",
> + [ARMAR_EXCP_UDEF] = "Undefined Instruction",
> + [ARMAR_EXCP_SWI] = "SVC",
> + [ARMAR_EXCP_PREFETCH_ABORT] = "Prefetch Abort",
> + [ARMAR_EXCP_DATA_ABORT] = "Data Abort",
> + [ARMAR_EXCP_IRQ] = "IRQ",
> + [ARMAR_EXCP_FIQ] = "FIQ",
> + [ARMAR_EXCP_BKPT] = "Breakpoint",
> + [ARMAR_EXCP_EXCEPTION_EXIT] = "QEMU v7M exception exit",
> + [ARMAR_EXCP_KERNEL_TRAP] = "QEMU intercept of kernel commpage",
> + [ARMAR_EXCP_STREX] = "QEMU intercept of STREX",
> + [ARMAR_EXCP_HVC] = "Hypervisor Call",
> + [ARMAR_EXCP_HYP_TRAP] = "Hypervisor Trap",
> + [ARMAR_EXCP_SMC] = "Secure Monitor Call",
> + [ARMAR_EXCP_VIRQ] = "Virtual IRQ",
> + [ARMAR_EXCP_VFIQ] = "Virtual FIQ",
> };
>
> static inline void arm_log_exception(int idx)
> diff --git a/target-arm/op_helper.c b/target-arm/op_helper.c
> index 3df9c57..1893753 100644
> --- a/target-arm/op_helper.c
> +++ b/target-arm/op_helper.c
> @@ -305,7 +305,7 @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome)
> if (arm_feature(env, ARM_FEATURE_XSCALE) && ri->cp < 14
> && extract32(env->cp15.c15_cpar, ri->cp, 1) == 0) {
> env->exception.syndrome = syndrome;
> - raise_exception(env, EXCP_UDEF);
> + raise_exception(env, ARMAR_EXCP_UDEF);
> }
>
> if (!ri->accessfn) {
> @@ -324,7 +324,7 @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome)
> default:
> g_assert_not_reached();
> }
> - raise_exception(env, EXCP_UDEF);
> + raise_exception(env, ARMAR_EXCP_UDEF);
> }
>
> void HELPER(set_cp_reg)(CPUARMState *env, void *rip, uint32_t value)
> @@ -362,7 +362,7 @@ void HELPER(msr_i_pstate)(CPUARMState *env, uint32_t op, uint32_t imm)
> * to catch that case at translate time.
> */
> if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UMA)) {
> - raise_exception(env, EXCP_UDEF);
> + raise_exception(env, ARMAR_EXCP_UDEF);
> }
>
> switch (op) {
> @@ -393,7 +393,7 @@ void HELPER(pre_hvc)(CPUARMState *env)
> bool secure = false;
> bool undef;
>
> - if (arm_is_psci_call(cpu, EXCP_HVC)) {
> + if (arm_is_psci_call(cpu, ARMAR_EXCP_HVC)) {
> /* If PSCI is enabled and this looks like a valid PSCI call then
> * that overrides the architecturally mandated HVC behaviour.
> */
> @@ -421,7 +421,7 @@ void HELPER(pre_hvc)(CPUARMState *env)
>
> if (undef) {
> env->exception.syndrome = syn_uncategorized();
> - raise_exception(env, EXCP_UDEF);
> + raise_exception(env, ARMAR_EXCP_UDEF);
> }
> }
>
> @@ -438,7 +438,7 @@ void HELPER(pre_smc)(CPUARMState *env, uint32_t syndrome)
> */
> bool undef = is_a64(env) ? smd : (!secure && smd);
>
> - if (arm_is_psci_call(cpu, EXCP_SMC)) {
> + if (arm_is_psci_call(cpu, ARMAR_EXCP_SMC)) {
> /* If PSCI is enabled and this looks like a valid PSCI call then
> * that overrides the architecturally mandated SMC behaviour.
> */
> @@ -451,12 +451,12 @@ void HELPER(pre_smc)(CPUARMState *env, uint32_t syndrome)
> } else if (!secure && cur_el == 1 && (env->cp15.hcr_el2 & HCR_TSC)) {
> /* In NS EL1, HCR controlled routing to EL2 has priority over SMD. */
> env->exception.syndrome = syndrome;
> - raise_exception(env, EXCP_HYP_TRAP);
> + raise_exception(env, ARMAR_EXCP_HYP_TRAP);
> }
>
> if (undef) {
> env->exception.syndrome = syn_uncategorized();
> - raise_exception(env, EXCP_UDEF);
> + raise_exception(env, ARMAR_EXCP_UDEF);
> }
> }
>
> @@ -756,7 +756,7 @@ void arm_debug_excp_handler(CPUState *cs)
> env->exception.fsr = 0x2;
> }
> env->exception.vaddress = wp_hit->hitaddr;
> - raise_exception(env, EXCP_DATA_ABORT);
> + raise_exception(env, ARMAR_EXCP_DATA_ABORT);
> } else {
> cpu_resume_from_signal(cs, NULL);
> }
> @@ -771,7 +771,7 @@ void arm_debug_excp_handler(CPUState *cs)
> env->exception.fsr = 0x2;
> }
> /* FAR is UNKNOWN, so doesn't need setting */
> - raise_exception(env, EXCP_PREFETCH_ABORT);
> + raise_exception(env, ARMAR_EXCP_PREFETCH_ABORT);
> }
> }
> }
> diff --git a/target-arm/psci.c b/target-arm/psci.c
> index d8fafab..b5b4e7f 100644
> --- a/target-arm/psci.c
> +++ b/target-arm/psci.c
> @@ -35,12 +35,12 @@ bool arm_is_psci_call(ARMCPU *cpu, int excp_type)
> uint64_t param = is_a64(env) ? env->xregs[0] : env->regs[0];
>
> switch (excp_type) {
> - case EXCP_HVC:
> + case ARMAR_EXCP_HVC:
> if (cpu->psci_conduit != QEMU_PSCI_CONDUIT_HVC) {
> return false;
> }
> break;
> - case EXCP_SMC:
> + case ARMAR_EXCP_SMC:
> if (cpu->psci_conduit != QEMU_PSCI_CONDUIT_SMC) {
> return false;
> }
> diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
> index 0b192a1..4666161 100644
> --- a/target-arm/translate-a64.c
> +++ b/target-arm/translate-a64.c
> @@ -245,7 +245,7 @@ static void gen_step_complete_exception(DisasContext *s)
> * of the exception, and our syndrome information is always correct.
> */
> gen_ss_advance(s);
> - gen_exception(EXCP_UDEF, syn_swstep(s->ss_same_el, 1, s->is_ldex));
> + gen_exception(ARMAR_EXCP_UDEF, syn_swstep(s->ss_same_el, 1, s->is_ldex));
> s->is_jmp = DISAS_EXC;
> }
>
> @@ -292,7 +292,7 @@ static inline void gen_goto_tb(DisasContext *s, int n, uint64_t dest)
> static void unallocated_encoding(DisasContext *s)
> {
> /* Unallocated and reserved encodings are uncategorized */
> - gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized());
> + gen_exception_insn(s, 4, ARMAR_EXCP_UDEF, syn_uncategorized());
> }
>
> #define unsupported_encoding(s, insn) \
> @@ -971,7 +971,7 @@ static inline bool fp_access_check(DisasContext *s)
> return true;
> }
>
> - gen_exception_insn(s, 4, EXCP_UDEF, syn_fp_access_trap(1, 0xe, false));
> + gen_exception_insn(s, 4, ARMAR_EXCP_UDEF, syn_fp_access_trap(1, 0xe, false));
> return false;
> }
>
> @@ -1498,7 +1498,7 @@ static void disas_exc(DisasContext *s, uint32_t insn)
> switch (op2_ll) {
> case 1:
> gen_ss_advance(s);
> - gen_exception_insn(s, 0, EXCP_SWI, syn_aa64_svc(imm16));
> + gen_exception_insn(s, 0, ARMAR_EXCP_SWI, syn_aa64_svc(imm16));
> break;
> case 2:
> if (s->current_el == 0) {
> @@ -1511,7 +1511,7 @@ static void disas_exc(DisasContext *s, uint32_t insn)
> gen_a64_set_pc_im(s->pc - 4);
> gen_helper_pre_hvc(cpu_env);
> gen_ss_advance(s);
> - gen_exception_insn(s, 0, EXCP_HVC, syn_aa64_hvc(imm16));
> + gen_exception_insn(s, 0, ARMAR_EXCP_HVC, syn_aa64_hvc(imm16));
> break;
> case 3:
> if (s->current_el == 0) {
> @@ -1523,7 +1523,7 @@ static void disas_exc(DisasContext *s, uint32_t insn)
> gen_helper_pre_smc(cpu_env, tmp);
> tcg_temp_free_i32(tmp);
> gen_ss_advance(s);
> - gen_exception_insn(s, 0, EXCP_SMC, syn_aa64_smc(imm16));
> + gen_exception_insn(s, 0, ARMAR_EXCP_SMC, syn_aa64_smc(imm16));
> break;
> default:
> unallocated_encoding(s);
> @@ -1536,7 +1536,7 @@ static void disas_exc(DisasContext *s, uint32_t insn)
> break;
> }
> /* BRK */
> - gen_exception_insn(s, 4, EXCP_BKPT, syn_aa64_bkpt(imm16));
> + gen_exception_insn(s, 4, ARMAR_EXCP_BKPT, syn_aa64_bkpt(imm16));
> break;
> case 2:
> if (op2_ll != 0) {
> @@ -1693,7 +1693,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
> tcg_gen_mov_i64(cpu_exclusive_test, addr);
> tcg_gen_movi_i32(cpu_exclusive_info,
> size | is_pair << 2 | (rd << 4) | (rt << 9) | (rt2 << 14));
> - gen_exception_internal_insn(s, 4, EXCP_STREX);
> + gen_exception_internal_insn(s, 4, ARMAR_EXCP_STREX);
> }
> #else
> static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
> @@ -11031,7 +11031,7 @@ void gen_intermediate_code_internal_a64(ARMCPU *cpu,
> * bits should be zero.
> */
> assert(num_insns == 0);
> - gen_exception(EXCP_UDEF, syn_swstep(dc->ss_same_el, 0, 0));
> + gen_exception(ARMAR_EXCP_UDEF, syn_swstep(dc->ss_same_el, 0, 0));
> dc->is_jmp = DISAS_EXC;
> break;
> }
> diff --git a/target-arm/translate.c b/target-arm/translate.c
> index 9116529..cf76a85 100644
> --- a/target-arm/translate.c
> +++ b/target-arm/translate.c
> @@ -250,7 +250,7 @@ static void gen_step_complete_exception(DisasContext *s)
> * of the exception, and our syndrome information is always correct.
> */
> gen_ss_advance(s);
> - gen_exception(EXCP_UDEF, syn_swstep(s->ss_same_el, 1, s->is_ldex));
> + gen_exception(ARMAR_EXCP_UDEF, syn_swstep(s->ss_same_el, 1, s->is_ldex));
> s->is_jmp = DISAS_EXC;
> }
>
> @@ -3039,7 +3039,7 @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
> * for attempts to execute invalid vfp/neon encodings with FP disabled.
> */
> if (!s->cpacr_fpen) {
> - gen_exception_insn(s, 4, EXCP_UDEF,
> + gen_exception_insn(s, 4, ARMAR_EXCP_UDEF,
> syn_fp_access_trap(1, 0xe, s->thumb));
> return 0;
> }
> @@ -4357,7 +4357,7 @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
> * for attempts to execute invalid vfp/neon encodings with FP disabled.
> */
> if (!s->cpacr_fpen) {
> - gen_exception_insn(s, 4, EXCP_UDEF,
> + gen_exception_insn(s, 4, ARMAR_EXCP_UDEF,
> syn_fp_access_trap(1, 0xe, s->thumb));
> return 0;
> }
> @@ -5095,7 +5095,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
> * for attempts to execute invalid vfp/neon encodings with FP disabled.
> */
> if (!s->cpacr_fpen) {
> - gen_exception_insn(s, 4, EXCP_UDEF,
> + gen_exception_insn(s, 4, ARMAR_EXCP_UDEF,
> syn_fp_access_trap(1, 0xe, s->thumb));
> return 0;
> }
> @@ -7432,7 +7432,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
> tcg_gen_extu_i32_i64(cpu_exclusive_test, addr);
> tcg_gen_movi_i32(cpu_exclusive_info,
> size | (rd << 4) | (rt << 8) | (rt2 << 12));
> - gen_exception_internal_insn(s, 4, EXCP_STREX);
> + gen_exception_internal_insn(s, 4, ARMAR_EXCP_STREX);
> }
> #else
> static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
> @@ -7959,7 +7959,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
> case 1:
> /* bkpt */
> ARCH(5);
> - gen_exception_insn(s, 4, EXCP_BKPT,
> + gen_exception_insn(s, 4, ARMAR_EXCP_BKPT,
> syn_aa32_bkpt(imm16, false));
> break;
> case 2:
> @@ -9021,7 +9021,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
> break;
> default:
> illegal_op:
> - gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized());
> + gen_exception_insn(s, 4, ARMAR_EXCP_UDEF, syn_uncategorized());
> break;
> }
> }
> @@ -10858,7 +10858,7 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s)
> {
> int imm8 = extract32(insn, 0, 8);
> ARCH(5);
> - gen_exception_insn(s, 2, EXCP_BKPT, syn_aa32_bkpt(imm8, true));
> + gen_exception_insn(s, 2, ARMAR_EXCP_BKPT, syn_aa32_bkpt(imm8, true));
> break;
> }
>
> @@ -11013,11 +11013,11 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s)
> }
> return;
> undef32:
> - gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized());
> + gen_exception_insn(s, 4, ARMAR_EXCP_UDEF, syn_uncategorized());
> return;
> illegal_op:
> undef:
> - gen_exception_insn(s, 2, EXCP_UDEF, syn_uncategorized());
> + gen_exception_insn(s, 2, ARMAR_EXCP_UDEF, syn_uncategorized());
> }
>
> /* generate intermediate code in gen_opc_buf and gen_opparam_buf for
> @@ -11159,7 +11159,7 @@ static inline void gen_intermediate_code_internal(ARMCPU *cpu,
> if (dc->pc >= 0xffff0000) {
> /* We always get here via a jump, so know we are not in a
> conditional execution block. */
> - gen_exception_internal(EXCP_KERNEL_TRAP);
> + gen_exception_internal(ARMAR_EXCP_KERNEL_TRAP);
> dc->is_jmp = DISAS_UPDATE;
> break;
> }
> @@ -11167,7 +11167,7 @@ static inline void gen_intermediate_code_internal(ARMCPU *cpu,
> if (dc->pc >= 0xfffffff0 && arm_dc_feature(dc, ARM_FEATURE_M)) {
> /* We always get here via a jump, so know we are not in a
> conditional execution block. */
> - gen_exception_internal(EXCP_EXCEPTION_EXIT);
> + gen_exception_internal(ARMAR_EXCP_EXCEPTION_EXIT);
> dc->is_jmp = DISAS_UPDATE;
> break;
> }
> @@ -11216,7 +11216,7 @@ static inline void gen_intermediate_code_internal(ARMCPU *cpu,
> * bits should be zero.
> */
> assert(num_insns == 0);
> - gen_exception(EXCP_UDEF, syn_swstep(dc->ss_same_el, 0, 0));
> + gen_exception(ARMAR_EXCP_UDEF, syn_swstep(dc->ss_same_el, 0, 0));
> goto done_generating;
> }
>
> @@ -11276,13 +11276,13 @@ static inline void gen_intermediate_code_internal(ARMCPU *cpu,
> gen_set_condexec(dc);
> if (dc->is_jmp == DISAS_SWI) {
> gen_ss_advance(dc);
> - gen_exception(EXCP_SWI, syn_aa32_svc(dc->svc_imm, dc->thumb));
> + gen_exception(ARMAR_EXCP_SWI, syn_aa32_svc(dc->svc_imm, dc->thumb));
> } else if (dc->is_jmp == DISAS_HVC) {
> gen_ss_advance(dc);
> - gen_exception(EXCP_HVC, syn_aa32_hvc(dc->svc_imm));
> + gen_exception(ARMAR_EXCP_HVC, syn_aa32_hvc(dc->svc_imm));
> } else if (dc->is_jmp == DISAS_SMC) {
> gen_ss_advance(dc);
> - gen_exception(EXCP_SMC, syn_aa32_smc());
> + gen_exception(ARMAR_EXCP_SMC, syn_aa32_smc());
> } else if (dc->ss_active) {
> gen_step_complete_exception(dc);
> } else {
> @@ -11297,13 +11297,13 @@ static inline void gen_intermediate_code_internal(ARMCPU *cpu,
> gen_set_condexec(dc);
> if (dc->is_jmp == DISAS_SWI && !dc->condjmp) {
> gen_ss_advance(dc);
> - gen_exception(EXCP_SWI, syn_aa32_svc(dc->svc_imm, dc->thumb));
> + gen_exception(ARMAR_EXCP_SWI, syn_aa32_svc(dc->svc_imm, dc->thumb));
> } else if (dc->is_jmp == DISAS_HVC && !dc->condjmp) {
> gen_ss_advance(dc);
> - gen_exception(EXCP_HVC, syn_aa32_hvc(dc->svc_imm));
> + gen_exception(ARMAR_EXCP_HVC, syn_aa32_hvc(dc->svc_imm));
> } else if (dc->is_jmp == DISAS_SMC && !dc->condjmp) {
> gen_ss_advance(dc);
> - gen_exception(EXCP_SMC, syn_aa32_smc());
> + gen_exception(ARMAR_EXCP_SMC, syn_aa32_smc());
> } else if (dc->ss_active) {
> gen_step_complete_exception(dc);
> } else {
> @@ -11341,13 +11341,13 @@ static inline void gen_intermediate_code_internal(ARMCPU *cpu,
> gen_helper_wfe(cpu_env);
> break;
> case DISAS_SWI:
> - gen_exception(EXCP_SWI, syn_aa32_svc(dc->svc_imm, dc->thumb));
> + gen_exception(ARMAR_EXCP_SWI, syn_aa32_svc(dc->svc_imm, dc->thumb));
> break;
> case DISAS_HVC:
> - gen_exception(EXCP_HVC, syn_aa32_hvc(dc->svc_imm));
> + gen_exception(ARMAR_EXCP_HVC, syn_aa32_hvc(dc->svc_imm));
> break;
> case DISAS_SMC:
> - gen_exception(EXCP_SMC, syn_aa32_smc());
> + gen_exception(ARMAR_EXCP_SMC, syn_aa32_smc());
> break;
> }
> if (dc->condjmp) {
> --
> 1.9.1
>
>