
Re: Autoconf manual's coverage of signed integer overflow & portability


From: Russell Shaw
Subject: Re: Autoconf manual's coverage of signed integer overflow & portability
Date: Wed, 03 Jan 2007 15:58:37 +1100
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.8) Gecko/20061105 Iceape/1.0.6 (Debian-1.0.6-1)

Richard Kenner wrote:
A few comments:

Many portable C programs assume that signed integer overflow wraps around
reliably using two's complement arithmetic.

I'd replace "portable C programs" with "widely-used C programs".  The normal
use of "portable" means that it conforms to the standard.

Conversely, in at least one common case related to overflow, the C standard
requires behavior that is commonly not implemented.

To what does this refer, the (x * 10 / 5) case?

In languages like C, unsigned integer overflow reliably wraps around modulo
the word size.

You mean modulo 2 ** word size.

This is guaranteed by the C standard and is portable in practice, unless
you specify aggressive optimization options suitable only for special
applications.

Not sure what the ", unless" part means.  Any conforming C compiler must
support this, so there's no need to qualify by that clause or even "in
practice".  This is portable and supported by all C compilers, period.

Ideally the safest approach is to avoid signed integer overflow
entirely.  For example, instead of multiplying two signed integers, you
can convert them to unsigned integers, multiply the unsigned values,
then test whether the result is in signed range.

Rewriting code in this way will be inconvenient, though, particularly if
the signed values might be negative.  Also, it will probably hurt
performance.

Why would it hurt performance?  Conversions between signed and unsigned are
no-ops, and signed and unsigned arithmetic generate identical instructions.
You should get exactly the same generated code by doing this.  And if you
bury it in macros, it isn't even particularly inconvenient.
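
As a rough sketch of that helper/macro idea, here is one way it could look,
restricted to non-negative operands for brevity (the general signed case needs
extra sign handling, as the draft text notes; the function name is made up):

#include <limits.h>
#include <stdbool.h>

/* Multiply two non-negative ints without relying on signed overflow:
   do the multiplication in unsigned arithmetic (which only wraps),
   then check that the product fits back into signed range. */
static bool mul_nonneg_checked(int a, int b, int *result)
{
    unsigned int ua = (unsigned int) a;
    unsigned int ub = (unsigned int) b;
    unsigned int uprod = ua * ub;

    if (ub != 0 && uprod / ub != ua)
        return false;                   /* the product wrapped around */
    if (uprod > (unsigned int) INT_MAX)
        return false;                   /* out of signed range */

    *result = (int) uprod;
    return true;
}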

If your code uses an expression like @code{(i * 2000) / 1000} and you
actually want the multiplication to wrap around reliably, put the
product into a temporary variable and divide that by 1000.  This
inhibits the algebraic optimization on many platforms.
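
(For reference, the two formulations contrasted here would look roughly like
this; the function names are purely illustrative.)

int scale_folded(int i)
{
    return i * 2000 / 1000;    /* a compiler may fold this to i * 2 */
}

int scale_via_temporary(int i)
{
    int product = i * 2000;    /* the intermediate product may wrap */
    return product / 1000;     /* divide the (possibly wrapped) value */
}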

I'd be dubious about including this.  Basically the only reason GCC doesn't
optimize the temporary variable case is because there's no tree-level
combiner.  But there will be some day.  Also, the out-of-ssa pass could
presumably do this.  With gimplification, there's very little difference for
most optimizers (except the constant folder, which just so happens to be the
relevant one here) between temporaries in user code and those created by the
gimplifier.

The chance of somebody actually needing wraparound semantics on such an
expression strikes me as vanishingly small, and if they do need it, they ought
to be able to figure out valid ways of writing it to get it.

Wrap-around is very useful for digital signal processing.

If the intermediate results of a digital filter wrap around but the final
result is in range, you'll still get the correct answer.

I use these wrap-around semantics all the time. Desktop PC programmers may
never use them, but embedded-system and DSP programmers will.
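
Here is a small sketch of that property, written with unsigned arithmetic so
the wraparound itself is well defined (the values are arbitrary; the same
modular-arithmetic argument is what two's-complement DSP code relies on):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = 3000000000u;
    uint32_t b = 2000000000u;
    uint32_t c = 4999999000u;

    uint32_t sum = a + b;       /* true sum is 5000000000; wraps to 705032704 */
    uint32_t result = sum - c;  /* back in range: the wraparound cancels out */

    printf("%u\n", result);     /* prints 1000, the mathematically exact answer */
    return 0;
}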




