help-octave

Re: Catastrophic Cancellation


From: A. Kalten
Subject: Re: Catastrophic Cancellation
Date: Wed, 2 Jul 2008 12:08:31 -0400

On Wed, 2 Jul 2008 10:36:22 -0400 (EDT)
Przemek Klosowski <address@hidden> wrote:

> You are evaluating an expression that loses precision near zero,
> and dividing by something very small near zero, which amplifies the
> loss of precision. Octave even spits a 'divide by zero' error,
> although it's a little bit of a lucky break.

I think the FPU generates a "divide by zero" exception that is
passed along to the software, and it is then up to the software to
handle it.  Octave does the correct thing by simply reporting the
condition and continuing.  After all, the limit of the expression
at zero does exist (it is 0.5).
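
For illustration (assuming the expression under discussion is
(1 - cos(x))/x^2, a standard cancellation example whose limit at
zero is 0.5), here is a quick Octave sketch of how the naive
formula degrades as x approaches zero:

    % Naive evaluation of (1 - cos(x))/x^2; the true limit at x = 0 is 0.5.
    x = 10 .^ (-1:-1:-9);        % points approaching zero
    f = (1 - cos (x)) ./ x.^2;   % 1 - cos(x) cancels catastrophically
    disp ([x' f'])               % drifts away from 0.5, then collapses to 0

The computed values are fine for moderate x but lose digits steadily
and finally collapse to 0 once cos(x) rounds to exactly 1.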

But this particular example is designed specifically for the
roughly 16 decimal digits of double precision.  If we rewrite the
same routine using long doubles, which with glibc on Linux carry
roughly 19 decimal digits, the problem disappears except for a tiny
region around zero.  Soon, the gcc compiler and glibc will support
the float128 data type, which will provide roughly 34 decimal
digits of precision.
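
Octave itself only offers single and double precision, but the same
effect is easy to see by going the other way: with fewer digits the
troublesome region around zero grows, and with more digits it merely
shrinks.  A rough sketch, again using the (1 - cos(x))/x^2 example:

    % The naive formula breaks down roughly where x^2/2 falls below the
    % machine epsilon of the type used, i.e. for |x| below about sqrt(eps).
    x = 1e-4;
    (1 - cos (single (x))) / single (x)^2   % single (~7 digits): already 0
    (1 - cos (x)) / x^2                     % double (~16 digits): near 0.5
    sqrt (eps ('single')), sqrt (eps)       % approximate breakdown thresholds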

More bits, however, will never eliminate the problem.  The question
is how a program can be written to avoid such errors.  Does every
subtraction have to be tested to determine whether its operands are
so nearly equal that most of the significant digits cancel?  There
are some standard tricks for avoiding certain problems (one is
sketched below), but unless I am mistaken there are no completely
general methods available.
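
One such trick, for this particular expression, is to remove the
subtraction altogether with the half-angle identity
1 - cos(x) = 2*sin(x/2)^2, which keeps the formula accurate
arbitrarily close to zero (again assuming the (1 - cos(x))/x^2
example):

    % Reformulated without the cancelling subtraction:
    %   (1 - cos(x))/x^2 = 2*sin(x/2)^2 / x^2 = 0.5 * (sin(x/2)/(x/2))^2
    f = @(x) 0.5 * (sin (x/2) ./ (x/2)).^2;
    x = 10 .^ (-1:-1:-12);
    disp ([x' f(x)'])            % stays close to 0.5 all the way down

That only works because the troublesome subtraction could be
transformed away analytically, which is exactly why no fully
general recipe exists.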

Anyway, the problem is interesting and I hope to study this
further.

AK


