Re: [lmi] Integer overflow warnings in bourn_cast with clang


From: Greg Chicares
Subject: Re: [lmi] Integer overflow warnings in bourn_cast with clang
Date: Thu, 13 Apr 2017 01:10:05 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.6.0

On 2017-04-06 15:24, Greg Chicares wrote:
> On 2017-04-06 00:24, Vadim Zeitlin wrote:
[...]
>>  Still, it does seem wrong to add 1 to the maximally representable value of
>> type "To" without being certain that it is _strictly_ less than that of
>> type "From".

Thanks for exposing that tacit assumption. Like other comparisons in
this code, this one is problematic. We could add one and then test
whether the result exceeds the limit--but if it does, then adding one
was UB (at least for a signed integral type), so the test is invalid.
Yet subtracting one from the limit might have no effect (for a
floating-point type whose precision cannot represent so small a
difference), and a value that is strictly greater might then compare
equal. Instead, I worked with some of the less commonly used members
of std::numeric_limits in commit 63a32a8.
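
For illustration only, here's a minimal sketch--not what commit
63a32a8 actually does--of deciding, with nothing but numeric_limits
members, whether every value of one integral type is representable
in another, so that no cross-type value comparison is ever needed:

    #include <limits>

    template<typename To, typename From>
    constexpr bool to_holds_every_from()
    {
        using to_traits   = std::numeric_limits<To>;
        using from_traits = std::numeric_limits<From>;
        // Compare signedness and value-bit counts, never values, so
        // nothing is converted and nothing can overflow.
        return
               (to_traits::is_signed || !from_traits::is_signed)
            && (to_traits::digits    >=  from_traits::digits)
            ;
    }

    static_assert( to_holds_every_from<long long   , int>(), "");
    static_assert(!to_holds_every_from<unsigned int, int>(), "");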

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1879.htm

| properly determining whether a source value is in range for a given
| destination type is tricky because involves comparing a value of a
| type with the boundary values of a different type, task that itself
| requires a numeric conversion. This is in fact so difficult that
| boost::numeric_cast<> had it wrong for certain cases for years,
| even after a number of revisions.

[it's still wrong in some cases we've documented, and we haven't
contemplated ones' complement or floating-point radix != 2--but the
ensuing conclusion is absolutely correct...]

| A numeric_cast<> is therefore a perfect candidate for standardization
| because it is widely needed but its implementation much too difficult
| to ask users to roll their own

Amen.
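
A concrete instance of the trap, for illustration (assuming the usual
IEEE 754 binary64 'double'): the largest std::int64_t value, 2^63-1,
is not representable as a double, so converting it for the comparison
rounds it up to exactly 2^63, and the range test wrongly accepts an
out-of-range value:

    #include <cstdint>
    #include <iostream>
    #include <limits>

    int main()
    {
        double const d = 9223372036854775808.0; // exactly 2^63
        // INT64_MAX (2^63-1) rounds up to 2^63 when converted to
        // double, so this test is satisfied even though 'd' cannot
        // be represented by std::int64_t.
        bool const seems_in_range =
            d <= static_cast<double>
                (std::numeric_limits<std::int64_t>::max());
        std::cout << std::boolalpha << seems_in_range << std::endl;
        // prints: true
    }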

> Good point. I'll see whether the gcc version I'm using has __int128.

It does, but...

    #include <iostream>
    #include <limits>

    int main()
    {
        __int128 x = std::numeric_limits<__int128>::max();
        long double y = x;
        std::cout.precision(50);
        std::cout << y << std::endl;
    }

prints merely

    0

presumably because libstdc++ doesn't specialize numeric_limits for
__int128 in strict conformance modes, so here max() is just the
primary template's zero.
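
If that guess is right, another of numeric_limits's less commonly
used members can confirm it--this assertion fires precisely when the
specialization is absent:

    #include <limits>

    static_assert
        (std::numeric_limits<__int128>::is_specialized
        ,"numeric_limits is not specialized for __int128 here"
        );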

We could do something like this, to calculate 1+XINT_MAX directly
for any integral type X:

    // to_traits is std::numeric_limits<To> for the destination type X;
    // the bounds are computed in the source type From, assuming binary
    // radix and two's complement (std::ldexp requires <cmath>).
    From const x   = std::ldexp(From(1), to_traits::digits); // 1+XINT_MAX
    From const max = x - 1;
    From const min = to_traits::is_signed ? -x : 0;

and add refinements for ones' complement and non-binary radixes,
but at this point we'd be calculating constants like INT_MAX from
first principles, and that's kind of crazy. This is a job that
really should be done by each C++ implementation, if only the
standards committee would adopt N1879--getting this really right
in the general case with a non-standard library is just too hard.
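
Just to make the fragment above concrete, here is a self-contained
check, for illustration and under the same binary, two's-complement
assumptions, that the ldexp approach really does reproduce int's
limits:

    #include <cassert>
    #include <climits>
    #include <cmath>
    #include <limits>

    int main()
    {
        using to_traits = std::numeric_limits<int>;
        double const x   = std::ldexp(1.0, to_traits::digits); // 2^31
        double const max = x - 1;
        double const min = to_traits::is_signed ? -x : 0;
        assert(INT_MAX == max); //  2147483647
        assert(INT_MIN == min); // -2147483648
    }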



