From: Pranith Kumar
Subject: Re: [Qemu-devel] [RFC PATCH] tcg: Optimize fence instructions
Date: Tue, 19 Jul 2016 14:29:56 -0400

Alex Bennée writes:

> Pranith Kumar <address@hidden> writes:
>
>> This patch applies on top of the fence generation patch series.
>>
>> This commit optimizes fence instructions. Two optimizations are
>> currently implemented:
>>
>> 1. Elimination of unnecessary duplicate fence instructions
>>
>>    If the same fence instruction occurs twice in a row, we remove
>>    one instance of it.
>>
>>    ex: mb; mb => mb  and  strl; strl => strl
>>
>> 2. Merging a weaker fence with a subsequent/previous stronger fence
>>
>>    A load-acquire/store-release fence can be combined with a full
>>    fence without relaxing the ordering constraint.
>>
>>    ex: a) ld; ldaq; mb => ld; mb
>>        b) mb; strl; st => mb; st
>>
>> Signed-off-by: Pranith Kumar <address@hidden>
>> ---
>>  tcg/optimize.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>  tcg/tcg.h      |  1 +
>>  2 files changed, 60 insertions(+)
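To make the two rewrite rules above concrete, here is a minimal toy model of
the pass. It is an illustration only: barriers are modelled as a small enum
with mb as the strongest, the names are invented, and none of this is QEMU's
actual TCGOp machinery.

    /* toy_fence_opt.c - illustration only: invented types, not QEMU's API.
     * Models the two rewrite rules from the commit message above:
     *   rule 1: mb; mb              => mb  (drop an exact duplicate)
     *   rule 2: ldaq/strl next to mb => mb (a full barrier subsumes a
     *           weaker acquire/release barrier in either order)
     */
    #include <stdio.h>

    enum fence { LDAQ = 1, STRL, MB };          /* MB is the strongest */

    /* Rewrites a run of consecutive fences in place; returns new length. */
    static int optimize_fences(enum fence *ops, int n)
    {
        int out = 0;
        for (int i = 0; i < n; i++) {
            if (out > 0) {
                enum fence prev = ops[out - 1];
                if (prev == ops[i]) {
                    continue;                   /* rule 1: duplicate fence */
                }
                if (ops[i] == MB && prev != MB) {
                    out--;                      /* rule 2: mb absorbs prev */
                } else if (prev == MB) {
                    continue;                   /* rule 2: mb absorbs this */
                }
            }
            ops[out++] = ops[i];
        }
        return out;
    }

    int main(void)
    {
        enum fence seq[] = { LDAQ, MB, MB, STRL };
        int n = optimize_fences(seq, 4);
        printf("%d fence(s) remain, first is MB: %d\n", n, seq[0] == MB);
        return 0;
    }

On the sequence ldaq; mb; mb; strl, rule 2 folds ldaq into the first mb,
rule 1 drops the duplicate mb, and rule 2 absorbs the trailing strl, leaving
a single mb.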
>>
>> diff --git a/tcg/optimize.c b/tcg/optimize.c
>> index c0d975b..a655829 100644
>> --- a/tcg/optimize.c
>> +++ b/tcg/optimize.c
>> @@ -569,6 +569,63 @@ static bool swap_commutative2(TCGArg *p1, TCGArg *p2)
>>      return false;
>>  }
>>
>> +/* Eliminate duplicate and unnecessary fence instructions */
>> +void tcg_optimize_mb(TCGContext *s)
>> +{
>> +    int oi, oi_next;
>> +    TCGArg prev_op_mb = -1;
>> +    TCGOp *prev_op;
>
>
> The compiler throws up warnings about prev_op not being set:
>
> /home/alex/lsrc/qemu/qemu.git/tcg/optimize.c: In function ‘tcg_optimize_mb’:
> /home/alex/lsrc/qemu/qemu.git/tcg/optimize.c:611:17: error: ‘prev_op’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
>                  tcg_op_remove(s, prev_op);
>                  ^

OK, I will fix this.
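
For reference, the pattern gcc is flagging, and one plausible shape of the
fix, is easy to reproduce in isolation. The sketch below uses stand-in types
rather than QEMU's: it initializes the pointer to NULL and guards the removal
so that every path to the call has a defined value.

    /* warn_fix.c - self-contained sketch of the -Wmaybe-uninitialized
     * pattern and one plausible fix; "struct op" and remove_op() are
     * stand-ins, not QEMU symbols. */
    #include <stddef.h>
    #include <stdio.h>

    struct op { int kind; };

    static void remove_op(struct op *o)
    {
        o->kind = -1;                 /* stand-in for tcg_op_remove() */
    }

    static void scan(struct op *ops, int n)
    {
        struct op *prev = NULL;       /* the fix: a defined initial value;
                                       * left uninitialized, gcc may warn
                                       * on the remove_op(prev) below */
        for (int i = 0; i < n; i++) {
            if (prev && prev->kind == ops[i].kind) {
                remove_op(prev);      /* drop the earlier duplicate */
            }
            prev = &ops[i];
        }
    }

    int main(void)
    {
        struct op v[] = { {1}, {1}, {2} };
        scan(v, 3);
        printf("%d %d %d\n", v[0].kind, v[1].kind, v[2].kind); /* -1 1 2 */
        return 0;
    }

With the NULL initialization and the prev check, the compiler can prove prev
is set before remove_op() runs, which is exactly what -Wmaybe-uninitialized
asks for.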

Thanks,
-- 
Pranith


