
Re: [Axiom-developer] Axiom January 2010 released


From: Tim Daly
Subject: Re: [Axiom-developer] Axiom January 2010 released
Date: Sat, 30 Jan 2010 19:16:41 -0500
User-agent: Thunderbird 2.0.0.21 (Windows/20090302)

Some of the algorithms are purely random, so the result
has nothing to do with GCL.

Actually, there is floating-point behavior that differs from
platform to platform. Run

 grep machineFraction src/input/*

I wrote 'machineFraction' to get at the actual bits in the
register (it uses Common Lisp's integer-decode-float; see
the code in books/bookvol5.pamphlet, e.g.
integer-decode-float-numerator).

In src/input/dfloat.input.pamphlet you can see that

machineFraction(2.71828)

is not equal to

machineFraction(address@hidden)

on Ubuntu, but they are equal on some other Linux platforms.
They should be equal.

Why these differ is on the list to debug.
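
For anyone who wants to poke at this outside of Axiom, here is a
minimal sketch of the idea (hypothetical function name, not the
actual bookvol5.pamphlet code); integer-decode-float is standard
Common Lisp, so this runs in GCL, SBCL, etc.:

  ;; Decode a double-float into the exact integer significand,
  ;; exponent, and sign held in the machine register.
  (defun machine-fraction-parts (x)
    (multiple-value-list
     (integer-decode-float (coerce x 'double-float))))

  ;; Two values that print the same can still decode differently
  ;; when a platform rounds the literal differently:
  ;;   (machine-fraction-parts 2.71828)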

Camm Maguire wrote:
Greetings, and thanks!

Tim Daly <address@hidden> writes:

...

At the very end of the build, Axiom scans the regress files looking
for any that fail. Some fail because the algorithm is random; others
fail for various reasons. I check these failure cases after every
Axiom build.

Any failures either get fixed in the next update or get added to the
known bugs list.

So, to answer your question, there are cases where the Axiom input
fails intentionally and the failing output is captured. This way it
is possible to test known bad inputs. The regression test system only
cares that the answers are the same; it does not care that the Axiom
input is a known, intentional failure.

Thus, the regression test system can test both successful and failing
input, but it will only complain if some output did not match the
previous results.
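
Roughly speaking (a hypothetical helper, not the actual Axiom test
driver), the comparison step amounts to a line-by-line diff of the
fresh output against the saved regress file:

  ;; Return the first (line-number new-line old-line) triple where
  ;; the fresh output diverges from the baseline; NIL means a match.
  (defun first-mismatch (new-file regress-file)
    (with-open-file (new new-file)
      (with-open-file (old regress-file)
        (loop for n = (read-line new nil)
              for o = (read-line old nil)
              for line from 1
              while (or n o)
              unless (equal n o)
                return (list line n o)))))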


Do I understand correctly, then, that since some failures are random,
and are therefore expected to produce differences from previous
output, there is no way to really catch a case, say, where GCL is
blowing the algorithm on a new platform?  If so, a wishlist item would
be to add some flag indicating an expected stochastic failure. An
autobuilder could then be instructed to demand strict compliance with
some known output, or fail.
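
For instance (names invented purely for illustration; no such flag
exists today), each stochastic baseline could carry a tag line, and
the checker could consult it before deciding whether a mismatch is
fatal:

  ;; Sketch of the wishlist item: a regress file is allowed to
  ;; differ only if it is explicitly marked as stochastic.
  (defun stochastic-p (regress-file)
    (with-open-file (s regress-file)
      (loop for line = (read-line s nil)
            while line
            thereis (search "@stochastic" line))))

  ;; An autobuilder could then abort on any unmarked mismatch:
  ;;   (unless (or (outputs-match-p new old) (stochastic-p old))
  ;;     (error "unexpected regression failure"))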

Take care,

Tim


Camm Maguire wrote:
Congratulations, Tim!

Separate question -- I notice there are many regression failures in
the build log.  Do you have any mechanism to identify "expected"
failures?  Or is there any other automated way to ensure the
correctness of the build?  For example, with Maxima, the autobuilder
is instructed to abort if there are any "unexpected" test suite
failures.

Take care,

Tim Daly <address@hidden> writes:

The Axiom Version January 2010 has been released to
the source code servers:

 from github:      git clone git://github.com/daly/axiom.git
 from savannah:    git clone address@hidden:/srv/git/axiom.git
 from sourceforge: git clone git://axiom.git.sourceforge.net/gitroot/axiom/axiom

(commit: 7aa4ca083d79f63bc46b0c21af41e9d79191f390)

Binaries will become available as time permits at:
http://axiom-developer.org/axiom-website/download.html

Tim

