
[Octave-bug-tracker] [bug #60539] Slow performance of betaincinv.m


From: Michael Leitner
Subject: [Octave-bug-tracker] [bug #60539] Slow performance of betaincinv.m
Date: Thu, 27 May 2021 12:24:36 -0400 (EDT)
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0

Follow-up Comment #15, bug #60539 (project octave):

The behaviour you show in comment 13 would be unexpected. However, I cannot
reproduce it (using your biinv.m and the literal definition
bady = 5.717842473940138e-07) -- it exits after iteration 14, with the same x
(the same to the displayed accuracy). However, if I go to the next-higher
double number bady1 = bady + 6e-23 (and display x to more digits), I do see
three additional steps where max (df) stays at 5.4e-20 and x increases each
time.

The point seems to be that betainc has a peculiar behaviour that I would
actually classify as a bug: consider the following code


x = 0.00023917730762341909 - (0:10)'*0.5e-19;
printf ("%.17e\n", x)
printf ("%.17e\n", betainc (x, 2, 4))


You will see that all these x are different (eps (0.000236) is 2.7e-20, less
than the step in x). However, betainc outputs only two distinct values of y
for these x (with a step of 20 eps between them), even though double
precision would without problems have allowed a distinct value for each one.
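To see that doubles really can resolve each of these x, one can check the
same numbers outside Octave. Here is a sketch in Python that evaluates the
regularized incomplete beta through its exact binomial closed form for
integer parameters, I_x(a,b) = sum_{j=a}^{a+b-1} C(a+b-1,j) x^j (1-x)^(a+b-1-j)
-- a stand-in for betainc, so the deficiency itself is not reproduced:

```python
from math import comb, ulp

def reg_inc_beta(x, a, b):
    """Regularized incomplete beta I_x(a, b) for integer a, b
    (binomial closed form)."""
    n = a + b - 1
    return sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(a, n + 1))

# The eleven x values from the snippet above, each 0.5e-19 apart.
xs = [0.00023917730762341909 - k * 0.5e-19 for k in range(11)]

# ulp(0.000236) = 2^-65, about 2.7e-20, is smaller than the 0.5e-19 step,
# so all eleven xs are distinct doubles.
print(ulp(0.000236))

ys = [reg_inc_beta(x, 2, 4) for x in xs]
for x, y in zip(xs, ys):
    print(f"{x:.17e}  {y:.17e}")
# With a fully accurate evaluation, more than two distinct y values come
# out, confirming that double precision can separate these inputs.
```

The first y agrees with the bady value quoted above to well beyond the
displayed digits.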

So what happened is the following: you started with some y that for the
initial 14 iterations gave good convergence (as expected), but at that point
gave an x that was still slightly too large. The Newton algorithm then
computed the necessary step size (using an explicit expression for the
derivative that should be accurate to the last eps). If betainc had worked
correctly, it would have signalled that the new y is below the target value,
the step direction would have been reversed, and the loop would have been
aborted. However, in my case betainc did not update the emitted y over three
iterations, so the algorithm performed the same step again and again (with
the same length, as the supposed error was always the same), until betainc
finally did give a lower value, at which point it aborted. And in your case
it was just particularly bad luck that you had some 14 different x for which
betainc always gave the same y.

So I would say what we have to do here is correct the stopping criterion of
betainc, so that it really gives the best answer (closest representable number
to the actual true result). Then such behaviour could not happen. 

If you say tol = eps (class (y)), you get a guaranteed accuracy of 2.2e-16
(for doubles), irrespective of y. That's not bad, and considering how the
present betainc behaves, perhaps not more than can be expected, but the
algorithm itself could in principle be much more accurate, specifically for
small y and thus small x -- e.g., for y = 1e-4 and alpha = 4, your code
would give only four digits of accuracy, it seems to me. Note also that this
would defeat the purpose of the option "upper", where the only sense I see
in providing that possibility is a question of accuracy for large y and
beta > 1. But yes, your code (after wrapping step in abs () in the
condition) would work.

Another stopping criterion that should give answers about as accurate as the
present one, but that is efficient also for the deficient betainc, would be
to break as soon as the absolute value of step has not decreased any more.
Due to how I choose the starting points, the step length has to strictly
decrease (in infinite-precision arithmetic), and if it does not, then it was
either zero (in which case we can safely stop) or we have hit the above
problem. Could you try that?
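A minimal sketch of that criterion in Python (self-contained: the
integer-parameter closed form stands in for Octave's betainc, the derivative
is the Beta density with 1/B(a,b) = b*C(a+b-1, a-1) for integers, and the
small-y starting point x0 = (y*a*B(a,b))^(1/a) is only my assumption about
how the starting points might be chosen, adequate for small y):

```python
from math import comb, inf

def reg_inc_beta(x, a, b):
    # I_x(a, b) for integer a, b via the binomial closed form.
    n = a + b - 1
    return sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(a, n + 1))

def beta_density(x, a, b):
    # d/dx I_x(a, b) = x^(a-1) (1-x)^(b-1) / B(a, b);
    # for integer a, b: 1/B(a, b) = b * C(a+b-1, a-1).
    return b * comb(a + b - 1, a - 1) * x**(a - 1) * (1 - x)**(b - 1)

def betaincinv_sketch(y, a, b, maxiter=100):
    inv_B = b * comb(a + b - 1, a - 1)
    # Starting point from I_x ~ x^a / (a * B(a, b)) for small x.
    x = (y * a / inv_B) ** (1.0 / a)
    prev_step = inf
    for _ in range(maxiter):
        step = (reg_inc_beta(x, a, b) - y) / beta_density(x, a, b)
        # Proposed criterion: stop as soon as |step| no longer decreases --
        # either we have converged (step is zero) or the incomplete-beta
        # routine has stopped resolving nearby arguments.
        if abs(step) >= abs(prev_step):
            break
        x -= step
        prev_step = step
    return x

y = 5.717842473940138e-07          # "bady" from the discussion
print(betaincinv_sketch(y, 2, 4))  # close to 2.3917730762341909e-4
```

With an accurate incomplete beta the loop stops once the Newton steps hit
rounding level; with a plateauing one, the first repeated step length
triggers the break instead of looping.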

Yes, it would be possible to decide whether the minimum is to the left or to
the right; this should depend on which of alpha and beta is smaller or
larger than one. Yes, if your y is close to zero and your alpha is large, or
if your y is close to one and your beta is large, you will have quite a
number of steps where the Newton algorithm converges slowly, because the
derivative changes so strongly. In such a case even bisection would converge
faster. But the problem is that in general you do not know when you have to
switch from bisection to Newton (which converges much faster near the
eventual solution). So it is not worth the effort, as one bisection step
takes as long as a Newton step (it is the evaluation of betainc that
dominates the computational effort, I am certain, not the derivative). And
further, we would lose the certainty that we are on the correct side of the
eventual solution, and we would again have to check whether a step takes us
out of the interval (0,1) -- note that this was the initial bug here.
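That out-of-interval failure mode is easy to illustrate with a hypothetical
example: one raw Newton step from the midpoint toward a very small y, for
a = 2, b = 4, using the integer-parameter closed form of the incomplete beta
(a sketch only, not Octave's actual code):

```python
# Closed form I_x(2, 4) = 10 x^2 (1-x)^3 + 10 x^3 (1-x)^2 + 5 x^4 (1-x) + x^5
def I(x):
    return 10*x**2*(1 - x)**3 + 10*x**3*(1 - x)**2 + 5*x**4*(1 - x) + x**5

def dI(x):                       # Beta(2, 4) density; 1/B(2, 4) = 20
    return 20*x*(1 - x)**3

y = 5.717842473940138e-07        # the tiny target value from the discussion
x = 0.5                          # naive starting point
step = (I(x) - y) / dI(x)        # about 0.65
print(x - step)                  # about -0.15: the raw step leaves (0,1)

# A simple safeguard: if the step would leave the interval, bisect toward
# the boundary on the overshoot side instead.
x_new = x - step
if not (0.0 < x_new < 1.0):
    x_new = x / 2                # bisect toward 0 here
print(x_new)                     # 0.25: still inside (0,1)
```

This is exactly why a bracket (or a starting point already on the correct
side of the solution) is needed if bisection-style safeguards are dropped.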


    _______________________________________________________

Reply to this item at:

  <https://savannah.gnu.org/bugs/?60539>
