
Re: [Axiom-developer] Axiom January 2010 released


From: Tim Daly
Subject: Re: [Axiom-developer] Axiom January 2010 released
Date: Sat, 30 Jan 2010 08:36:16 -0500
User-agent: Thunderbird 2.0.0.21 (Windows/20090302)

There are failures in the build log, but most of them are expected:
those test cases are designed to exercise deliberately failing inputs.

The design of the regression test system is that you create an input file,
capture the output, and append the output after each command, marked with a prefix.

So

1+1

in an input file generates

 (1) 2
                 Type: PositiveInteger

Now you construct an input file that has some special comments
around this test case (see any file in src/input/*.input.pamphlet):

--S 1 of 32
1+1
--R  (1) 2
--R                Type: PositiveInteger
--E 1

Since anything beginning with "--" is an Axiom comment, almost all of
these lines are ignored when the file is read. If you read the file and
spool the output, you will see:

--S 1 of 32                                     <-- the test case number
1+1                                             <-- the Axiom input
                                                <-- the actual output
 (1) 2                                          <--
                 Type: PositiveInteger          <--
--R                                             <-- the expected output
--R  (1) 2                                      <--
--R                Type: PositiveInteger        <--
--E 1                                           <-- end of this test case

We see the input (1+1) and the actual Axiom output, followed by comments
that show the expected Axiom output. There is a function called "regress"
which takes this spool output file (say, "foo.output") and compares the
actual Axiom output with the expected Axiom output.

)lisp (regress "foo.output")

It lists the successful and the failing test cases (look in int/input/*.regress).
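
To make the comparison concrete, here is a minimal sketch of the idea
in Common Lisp. This is NOT the actual regress code from the Axiom
sources; the spool layout it assumes (the --S/--R/--E markers shown
above, with the echoed input as the first line after --S) is spelled
out in the comments.

;; A minimal sketch of the regress idea -- NOT the actual Axiom code.
;; Assumed spool layout per test case:
;;   --S n of m      start marker
;;   <input>         the echoed Axiom input (skipped here)
;;   <output>        actual output lines
;;   --R <expected>  expected output lines
;;   --E n           end marker; triggers the comparison
(defun prefixp (prefix line)
  (and (>= (length line) (length prefix))
       (string= prefix line :end2 (length prefix))))

(defun regress-sketch (file)
  (with-open-file (in file)
    (let ((actual '()) (expected '()) (case-id nil)
          (skip-input nil) (pass 0) (fail 0))
      (loop for line = (read-line in nil)
            while line
            do (cond
                 ((prefixp "--S" line)          ; new test case
                  (setf case-id line actual '() expected '()
                        skip-input t))
                 ((prefixp "--R" line)          ; collect expected output
                  (let ((text (string-trim " " (subseq line 3))))
                    (when (plusp (length text)) (push text expected))))
                 ((prefixp "--E" line)          ; end of case: compare
                  (if (equal actual expected)   ; both lists reversed, so
                      (incf pass)               ; equality still holds
                      (progn (incf fail)
                             (format t "FAILED: ~a~%" case-id))))
                 (skip-input                    ; skip the echoed input line
                  (setf skip-input nil))
                 ((plusp (length (string-trim " " line)))
                  (push (string-trim " " line) actual)))) ; actual output
      (format t "~d passed, ~d failed~%" pass fail))))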

At the very end of the build, Axiom scans the regress files looking for
any cases that fail. Some can fail because the algorithm under test is
randomized; others fail for various reasons. I check these failure cases
after every Axiom build.
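
As an illustration only, such a scan could look like the sketch below.
The directory and the "failed" marker are assumptions for the example,
not the actual build logic.

;; Hypothetical end-of-build scan -- NOT the actual Axiom build code.
;; Walks the regress reports and prints every line mentioning a failed
;; case; the directory and the "failed" marker are assumptions.
(defun scan-regress (dir)
  (dolist (path (directory (merge-pathnames "*.regress" dir)))
    (with-open-file (in path)
      (loop for line = (read-line in nil)
            while line
            when (search "failed" line)
              do (format t "~a: ~a~%" path line)))))

;; e.g. (scan-regress #p"int/input/")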

Any failures either get fixed in the next update or get added to the known bugs list.

So, to answer your question, there are cases where the Axiom input fails
intentionally and the failing output is captured; this makes it possible to
test known bad inputs. The regression test system only cares that the answers
are the same; it does not care that the Axiom input is a known, intentional failure.

Thus, the regression test system can test both successful and failing input,
but it will only complain if some output does not match the previous results.

Tim


Camm Maguire wrote:
Congratulations, Tim!

Separate question -- I notice there are many regression failures in
the build log.  Do you have any mechanism to identify "expected"
failures?  Or is there any other automated way to ensure the
correctness of the build?  For example, with Maxima, the autobuilder
is instructed to abort if there are any "unexpected" test suite
failures.
Take care,

Tim Daly <address@hidden> writes:

Axiom version January 2010 has been released to
the source code servers:

 from github:      git clone git://github.com/daly/axiom.git
 from savannah:    git clone address@hidden:/srv/git/axiom.git
 from sourceforge: git clone git://axiom.git.sourceforge.net/gitroot/axiom/axiom

(commit: 7aa4ca083d79f63bc46b0c21af41e9d79191f390)

Binaries will become available as time permits at:
http://axiom-developer.org/axiom-website/download.html

Tim

