Re: [Qemu-devel] [PATCH v5 28/28] fpu/softfloat: Define floatN_silence_nan in terms of parts_silence_nan


From: Richard Henderson
Subject: Re: [Qemu-devel] [PATCH v5 28/28] fpu/softfloat: Define floatN_silence_nan in terms of parts_silence_nan
Date: Tue, 15 May 2018 09:14:14 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.7.0

On 05/15/2018 08:41 AM, Richard Henderson wrote:
> On 05/15/2018 06:45 AM, Alex Bennée wrote:
>>> +float64 float64_silence_nan(float64 a, float_status *status)
>>> +{
>>> +    return float64_pack_raw(parts_silence_nan(float64_unpack_raw(a), status));
>>> +}
>>> +
>>
>> Not that I'm objecting to the rationalisation, but did you look at the
>> code generated now that we unpack NaNs? I guess NaN behaviour isn't on
>> the critical path for performance anyway...
> 
> Yes, I looked.  It's about 5 instructions instead of 1.
> But as you say, it's nowhere near the critical path.
> 
> Ugh.  I've also just realized that the shift isn't correct, though...
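
For context on the "5 instructions instead of 1" comparison, here is a
minimal standalone sketch of the two shapes of code involved, assuming
the usual quiet-bit-set (snan_bit_is_one == false) convention; qemu's
real helpers operate on the decomposed FloatParts representation and
differ in detail.

#include <stdint.h>

/* Direct form: silence a signaling NaN by setting the quiet bit, the
 * most significant fraction bit of the packed binary64 format.  This
 * is the one-instruction version. */
static uint64_t silence_f64_direct(uint64_t a)
{
    return a | UINT64_C(0x0008000000000000);
}

/* Unpack/repack form, shaped like the patch above: split the word into
 * sign/exponent/fraction, set the quiet bit in the fraction, and
 * reassemble.  Plain IEEE 754 binary64 fields here; qemu's FloatParts
 * layout differs. */
static uint64_t silence_f64_parts(uint64_t a)
{
    uint64_t sign = a >> 63;
    uint64_t exp  = (a >> 52) & 0x7ff;
    uint64_t frac = a & UINT64_C(0x000fffffffffffff);

    frac |= UINT64_C(0x0008000000000000);   /* quiet bit */

    return (sign << 63) | (exp << 52) | frac;
}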

Having fixed that and re-checked... the compiler is weird.

The float32 version optimizes to 1 insn, as we would hope.  The float16 version
optimizes to 5 insns, extracting and re-inserting the sign bit.  The float64
version optimizes to 10 insns, extracting and re-inserting the exponent as well.

Very odd.
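
For reference, the single instruction each width should ideally reduce
to is an OR with the quiet bit, i.e. the most significant fraction bit
of the packed format; the masks below assume the quiet-bit-set
convention.

#include <stdint.h>

/* Quiet-bit masks: bit 9 of 10 fraction bits (binary16), bit 22 of 23
 * (binary32), bit 51 of 52 (binary64). */
static inline uint16_t quiet_f16(uint16_t a) { return a | 0x0200; }
static inline uint32_t quiet_f32(uint32_t a) { return a | 0x00400000u; }
static inline uint64_t quiet_f64(uint64_t a)
{
    return a | UINT64_C(0x0008000000000000);
}

Comparing objdump -d output of the built softfloat object against these
is one way to reproduce the 1/5/10-instruction counts reported above.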


r~


