Re: can't reproduce documented overflow behavior of _delay_ms()

From: Joerg Wunsch
Subject: Re: can't reproduce documented overflow behavior of _delay_ms()
Date: Mon, 27 Jan 2020 22:18:34 +0100

As Britton Kerin wrote:

>   If the avr-gcc toolchain has __builtin_avr_delay_cycles() support,
>   the maximal possible delay is 4294967.295 ms / F_CPU in MHz.  For
>   values greater than the maximal possible delay, overflow results
>   in no delay, i.e. 0 ms.
>
> It looks like the result is now the maximum delay, rather than 0 ms.
> Perhaps __builtin_avr_delay_cycles() has changed?

No, obviously, the documentation is wrong, and the delay functions
clip the delay value at a __builtin_avr_delay_cycles() value of
UINT_MAX rather than setting it to 0.

However, I just revisited the C standard on this.  All this is simply
undefined behaviour: the internal calculation is performed with a
"double" argument type, which is eventually then converted to
uint32_t, thereby overflowing the uint32_t domain.

The C standard says (6.3.1.4 Real floating and integer):

1 When a finite value of real floating type is converted to an integer
  type other than _Bool, the fractional part is discarded (i.e., the
  value is truncated toward zero). If the value of the integral part
  cannot be represented by the integer type, the behavior is
  undefined.

So this explains why the documentation claims 0 but we now see an
argument of UINT_MAX.

Feel free to file a documentation bug for that.  I don't think anybody
will use such long delays in any practical application, but the
documentation ought to be correct, mentioning that overflowing the
respective integer domain yields undefined behaviour.

cheers, Joerg               .-.-.   --... ...--   -.. .  DL8DTL

Never trust an operating system you don't have sources for. ;-)
