From: Pranith Kumar
Subject: Re: [Qemu-devel] [RFC PATCH 2/3] tcg: Add support for fence generation in x86 backend
Date: Wed, 25 May 2016 15:56:20 -0400

Hi Richard,

Thank you for the helpful comments.

On Wed, May 25, 2016 at 1:35 PM, Richard Henderson <address@hidden> wrote:
> On 05/24/2016 10:18 AM, Pranith Kumar wrote:
>> diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
>> index 92be341..93ea42e 100644
>> --- a/tcg/i386/tcg-target.h
>> +++ b/tcg/i386/tcg-target.h
>> @@ -100,6 +100,7 @@ extern bool have_bmi1;
>>  #define TCG_TARGET_HAS_muls2_i32        1
>>  #define TCG_TARGET_HAS_muluh_i32        0
>>  #define TCG_TARGET_HAS_mulsh_i32        0
>> +#define TCG_TARGET_HAS_fence            1
>
>
> This has to be defined for all hosts.

OK. I will add an entry in tcg.h with a default of 0 and override it in
each architecture's tcg-target.h once the fence is implemented there.
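Roughly this, as a first sketch (the exact guard and placement in tcg.h
are still an assumption on my part):

/* Hypothetical default: backends that can emit a fence override
 * this with 1 in their tcg-target.h. */
#ifndef TCG_TARGET_HAS_fence
#define TCG_TARGET_HAS_fence    0
#endif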

>> @@ -347,6 +347,7 @@ static inline int
>> tcg_target_const_match(tcg_target_long val, TCGType type,
>>  #define OPC_SHRX        (0xf7 | P_EXT38 | P_SIMDF2)
>>  #define OPC_TESTL      (0x85)
>>  #define OPC_XCHG_ax_r32        (0x90)
>> +#define OPC_MFENCE      (0xAE | P_EXT)
>
> Why define OPC_MFENCE if you're not going to use it?  Of course, it's not
> exactly a complete and useful definition, so maybe just delete OPC_MFENCE.

I want to use OPC_MFENCE instead of hard-coding the value in
tcg_out_fence(), but as you said the definition is not complete (it
currently generates only 0x0FAE). I am trying to figure out how to
generate 0x0FAEF0 from the definition.
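What I have in mind is roughly the following (just a sketch; it assumes
the existing tcg_out_modrm() helper can be used here and that the P_EXT
flag in OPC_MFENCE makes it emit the 0x0F escape, as for the other
two-byte opcodes in the backend):

static inline void tcg_out_fence(TCGContext *s)
{
    /* mfence is 0F AE F0: opcode 0F AE followed by a ModRM byte with
     * mod=3, reg=/6, rm=0, i.e. 0xF0, so emit it via the modrm helper. */
    tcg_out_modrm(s, OPC_MFENCE, 6, 0);
}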

>
> Also, for 32-bit you need to check for sse2 before outputting this.  See
> also the existing cpuid checks in tcg_target_init and the fallback smp_mb
> definition for pre-gcc-4.4.

OK, I'll check the current code and do something similar.
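For the record, the 32-bit check I am thinking of would look roughly
like this (a sketch only; have_sse2 is a hypothetical new flag alongside
have_bmi1, and it assumes the existing CONFIG_CPUID_H / <cpuid.h> path
in tcg_target_init):

#if defined(CONFIG_CPUID_H) && TCG_TARGET_REG_BITS == 32
    {
        unsigned a, b, c, d;
        /* CPUID leaf 1, EDX bit 26 indicates SSE2 (and hence mfence). */
        if (__get_cpuid(1, &a, &b, &c, &d)) {
            have_sse2 = (d & bit_SSE2) != 0;   /* have_sse2 is hypothetical */
        }
    }
#endif

When have_sse2 is false, tcg_out_fence() would then have to fall back to
something like a locked read-modify-write on the stack, similar to the
pre-gcc-4.4 smp_mb fallback you mentioned.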

Thanks,
-- 
Pranith


