avr-libc-dev

Re: can't reproduce documented overflow behavior of _delay_ms()


From: Joerg Wunsch
Subject: Re: can't reproduce documented overflow behavior of _delay_ms()
Date: Tue, 4 Feb 2020 23:05:26 +0100

As Georg-Johann Lay wrote:

> hmmm.  So there is more work to do for double support?  We definitely do not
> want double here but float instead...

I disagree.

This all happens in temporary variables inside "static inline"
functions, and is therefore completely optimized away by the compiler.
Thus, the actual argument type doesn't really matter.

With optimizations turned off, all bets are off for the various _delay*
functions anyway.
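
For illustration, here is a minimal stand-alone sketch (not the actual
<util/delay.h> implementation; the clock value and tick math are made up
for the example) of why the argument type becomes irrelevant once the
compiler can constant-fold it:

    #include <stdint.h>

    #define F_CPU 16000000UL              /* assumed clock for this example */

    static inline void my_delay_ms(double ms)
    {
        /* all double arithmetic here is folded at compile time when
           the argument is a constant and optimization is enabled */
        uint16_t ticks = (uint16_t)((F_CPU / 4000.0) * ms);
        while (ticks--)
            __asm__ volatile ("" ::: "memory");   /* placeholder busy loop */
    }

    int main(void)
    {
        my_delay_ms(10.0);   /* constant argument: no FP code should be emitted */
        return 0;
    }

With optimization enabled, the compiler should reduce the whole
floating-point expression to a single integer constant before code
generation, so float vs. double for the parameter makes no difference to
the generated code.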

The artifact we are caught by here is that overflowing a conversion from
float or double (no matter which) to an integral type is undefined
behaviour. Apparently, in the past this yielded a result of 0 (which was
presumably determined empirically at the time, and then recorded as a
matter of fact in our documentation), whereas it now appears to yield
the largest possible value for the target type (about 269 s of delay at
16 MHz). Since this is undefined behaviour, the compiler would have been
free to just use 42 as well ...
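
To make the undefined conversion concrete, a minimal sketch (not taken
from the library; the numbers are made up):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        double big = 1.0e9;            /* far larger than UINT16_MAX */

        /* Converting a floating-point value that does not fit into the
           target integer type is undefined behaviour (C11 6.3.1.4p1). */
        uint16_t t = (uint16_t)big;

        /* Older compiler versions reportedly produced 0 here, newer ones
           the maximum representable value; any result would be legal. */
        printf("%u\n", t);
        return 0;
    }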

-- 
cheers, Joerg               .-.-.   --... ...--   -.. .  DL8DTL

http://www.sax.de/~joerg/
Never trust an operating system you don't have sources for. ;-)


