From: Andreas Färber
Subject: Re: [Qemu-devel] [PATCH for-2.0] cpu: do not use QOM casts in ENV_GET_CPU
Date: Fri, 28 Mar 2014 15:52:15 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0

Am 26.03.2014 14:42, schrieb Paolo Bonzini:
> QOM casts are only typesafe inasmuch as we know that the argument is
> a QOM object.  If it is not, the accesses to fields in Object can
> access invalid memory and thus cause a segfault.
> 
> Using a QOM cast in ENV_GET_CPU is useless and harmful.  Useless,
> because the cast is applied to the result of container_of, which is
> type safe.  So the QOM cast is nothing but typesafety theater.
> Harmful, because ENV_GET_CPU *is* used in hot paths especially
> now that, in 2.0, the movement of fields from CPU_COMMON to
> CPUState was completed.
> 
> Reported-by: Laurent Desnogues <address@hidden>
> Cc: Andreas Faerber <address@hidden>
> Signed-off-by: Paolo Bonzini <address@hidden>
> ---
>  target-alpha/cpu-qom.h      | 2 +-
>  target-arm/cpu-qom.h        | 2 +-
>  target-cris/cpu-qom.h       | 2 +-
>  target-i386/cpu-qom.h       | 2 +-
>  target-lm32/cpu-qom.h       | 2 +-
>  target-m68k/cpu-qom.h       | 2 +-
>  target-microblaze/cpu-qom.h | 2 +-
>  target-mips/cpu-qom.h       | 2 +-
>  target-ppc/cpu-qom.h        | 2 +-
>  target-s390x/cpu-qom.h      | 2 +-
>  target-sh4/cpu-qom.h        | 2 +-
>  target-sparc/cpu-qom.h      | 2 +-
>  target-unicore32/cpu-qom.h  | 2 +-
>  target-xtensa/cpu-qom.h     | 2 +-
>  14 files changed, 14 insertions(+), 14 deletions(-)

Thanks for the reminder, Laurent. :)

This patch is missing two targets:

diff --git a/target-moxie/cpu.h b/target-moxie/cpu.h
index c5b12a5..667d751 100644
--- a/target-moxie/cpu.h
+++ b/target-moxie/cpu.h
@@ -109,7 +109,7 @@ static inline MoxieCPU *moxie_env_get_cpu(CPUMoxieState *env)
     return container_of(env, MoxieCPU, env);
 }

-#define ENV_GET_CPU(e) CPU(moxie_env_get_cpu(e))
+#define ENV_GET_CPU(e) ((CPUState *)moxie_env_get_cpu(e))

 #define ENV_OFFSET offsetof(MoxieCPU, env)

diff --git a/target-openrisc/cpu.h b/target-openrisc/cpu.h
index 4512f45..7d671a8 100644
--- a/target-openrisc/cpu.h
+++ b/target-openrisc/cpu.h
@@ -339,7 +339,7 @@ static inline OpenRISCCPU *openrisc_env_get_cpu(CPUOpenRISCState *env)
     return container_of(env, OpenRISCCPU, env);
 }

-#define ENV_GET_CPU(e) CPU(openrisc_env_get_cpu(e))
+#define ENV_GET_CPU(e) ((CPUState *)openrisc_env_get_cpu(e))

 #define ENV_OFFSET offsetof(OpenRISCCPU, env)

I do wonder whether it wouldn't make more sense to simply
#define CPU(obj) ((CPUState *)obj)
which would catch all uses of CPU() while still allowing us to change it to
#define CPU(obj) dynamic_cast<CPUState>(obj)
or similar in the future without touching dozens of places. We can obviously
also take this patch for 2.0 and then revert and redo it differently.

Opinions?

Regards,
Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
