From: Bill Perry
Subject: [avr-libc-dev] [bug #30363] _delay_xx() functions in <util/delay.h> are broken
Date: Thu, 07 Oct 2010 00:14:27 +0000
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.10) Gecko/20100915 Ubuntu/9.04 (jaunty) Firefox/3.6.10

Follow-up Comment #10, bug #30363 (project avr-libc):

Other than the fact that loop/cycle count truncation was the previous behavior,
what is the reasoning for preferring truncation as the new default rather than
"round up" going forward?

The reason I ask is that, given the truncation boundary is different from the
previous one and the behavior will be slightly different anyway, does it still
make sense to make truncation the default?

I can live with anything being the default as long as there is a way to tune
it to "round up".
But my preference would be to make "round up" the default, because it
guarantees the delay is always non-zero and at least as long as what was asked
for, without any worries or special cases for 0 cycles.
With "round up" as the default, the user would have to define something
explicitly to ever get a delay that is shorter than what was asked for.
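
Just to illustrate the difference (the names below are only placeholders, not
the actual delay.h variables), the two defaults differ only in how the exact,
possibly fractional cycle count gets converted to a whole number:

  double __tmp = ((double)F_CPU * __us) / 1e6;  /* exact cycle count    */

  __ticks = (uint32_t)__tmp;                    /* truncation: 0.5 -> 0 */
  __ticks = (uint32_t)ceil(__tmp);              /* round up:   0.5 -> 1 */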

==

(Truncation and rounding down do have a few potential backward-compatibility
issues when the cycle count falls below 1.)

I am worried about delays being reduced or even eliminated when the cycle
count is truncated or rounded down to 0, because this creates different
behavior than the previous code.
I am concerned that this total elimination of delays will break code that is
accidentally depending on the delay from _delay_us() being rounded up to 3
cycles, because the new code will completely eliminate some delays that
previously were 3 cycles.

For example, suppose somebody called _delay_us(0.5) while running on a 1 MHz
clock.
Previously they got a 3 us delay, but if truncation is the default and there
is no bump to a non-zero number of cycles, they would get no delay at all,
which might not work.
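
Working through that case (assuming I have the old 3-cycle loop granularity
right):

  cycles needed = 0.5 us * 1 MHz = 0.5 cycles
  old code      : loop count rounded up to one 3-cycle loop -> 3 us delay
  truncation    : 0.5 cycles truncated to 0                 -> no delay at all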

My assumption is that nobody calls _delay_xx() unless they really want/need
some sort of delay, even if just 1 cycle. So I'm wondering whether it is ever
desirable to completely eliminate a delay, i.e. should the minimum delay
always be at least 1 cycle?

I'm not sure what the right answer is for making the delay code better and
more accurate going forward without breaking some older code.
But my main concern is the effect of eliminating delays completely.

Even making the minimum delay 1 cycle instead of 3 cycles could potentially
break some code, but for some reason a 1-cycle minimum feels a bit better to
me than totally eliminating a delay.
But I'll admit I may not be the best source of input on this, as I will always
be using "round up".

===

I'm assuming ceil() is a gcc builtin, or it could be something like this?

#define _delay_ceil(x) \
    ((x) == (uint32_t)(x) ? (uint32_t)(x) : (uint32_t)(x) + 1)
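
(For what it's worth, gcc also provides __builtin_ceil(), which it can fold at
compile time for constant arguments, so something along these lines might work
too; __ticks and __tmp are just placeholder names for the computed loop/cycle
counts:

  __ticks = (uint32_t)__builtin_ceil(__tmp);
)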

==========================================================
Long delays:
On the longer delays, I'm with you on this. I don't see the need for
super-long delays either, but I thought I'd ask.
My focus has been on making sure the delay functions work on the very short
end, for hardware setup timing where there is no alternative to using CPU
spin loops.

==========================================================
Backward compatibility:
As mentioned above, my biggest concern is at the short end, where code may be
"accidentally" working by getting additional delay cycles. There are bound to
be some issues where people hand-tuned things or depend on the "odd" 3/4 cycle
rounding, and I think there will be no way to guarantee not breaking those
(unless there is a backward compatibility mode).
For the short end, there are a few options:
1) Let it fall to 0 (eliminate the delay) - this worries me.
2) Force it to be 1 cycle if it falls to 0.
3) Have another define that sets the MIN cycles, so the user
   can create some backward compatibility by defining it to 3 (or 1)
   if necessary. If this define is not set, it defaults
   to allowing 0 (no delay). (See the rough sketch after this list.)
4) Have a define to return to the old behavior.
   If it was really required, or we were really worried about it,
   there could be a backward compatibility define that forced
   it back to the way it works now, i.e. truncation and the odd
   rounding to 3/4 cycle/loop boundaries.
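
A rough sketch of what option 3 might look like (the __DELAY_MIN_CYCLES name
is just something I made up for illustration, and __ticks stands for whatever
the computed cycle/loop count ends up being called):

#ifndef __DELAY_MIN_CYCLES
#define __DELAY_MIN_CYCLES 0   /* default: allow the delay to drop to 0 */
#endif

  if (__ticks < __DELAY_MIN_CYCLES)
      __ticks = __DELAY_MIN_CYCLES;  /* user can define it to 1 or 3 */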


==========================================================
Supporting delays using variables (non-constants):

Yes, this is unrelated to this bug, and it does deserve its own bug report.
This would be the modification of the existing functions, or the creation of
new functions, to allow users to create delays that are specified by a
variable instead of a constant.

i.e. (ignore that the optimizer might convert this to a constant in this
simple case):

  int x;
  x = 10;
  _delay_ms(x);

Maybe this is best handled by new library functions, like a
delay_ms(int x)
that is a wrapper around _delay_ms().
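
Something along these lines, maybe (just a sketch of the idea, not a proposal
for the actual implementation; the name and argument type are arbitrary):

#include <stdint.h>
#include <util/delay.h>

void delay_ms(uint16_t ms)
{
    /* keep the argument to _delay_ms() constant so the compile-time
       math still works, and handle the variable part at run time */
    while (ms--)
        _delay_ms(1);
}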

Anyway, you are right that it is not related to this bug, and as simple as it
sounds, there are actually quite a few issues/challenges in making it fully
work in the general case.

--- bill

    _______________________________________________________

Reply to this item at:

  <http://savannah.nongnu.org/bugs/?30363>

_______________________________________________
  Message sent via/by Savannah
  http://savannah.nongnu.org/



