axiom-math

Re: [Axiom-math] How reads axiom Expressions ?


From: root
Subject: Re: [Axiom-math] How reads axiom Expressions ?
Date: Mon, 13 Oct 2003 13:52:01 -0400

François,

You could probably define a "Treeform" domain that keeps the
results in a tree representation. Then you could define the
MuPAD operations in this representation and it would print
exactly like MuPAD. I have never used MuPAD, so I can't really
say what those operations are. The Cohen book gives a short
introduction to the tree-form representation and a set of
simplification rules.

Simplification is a complex subject, and tree representations
give a misleading picture of the space of available results.
I'll give you some thoughts on the issues involved.

The issue was how to correctly perform simplification. In particular,
we'd like to raise simplification to a user-controlled level rather
than keep it embedded in the interpreter.

Ordinary hand simplification ignores several issues which a system like
Axiom brings to light. 

Suppose we try a naive approach. We could decorate each domain (those
parts of Axiom which have an internal representation) with a function
that would return the internal representation recursively expanded
into a lisp s-expression. We could also give each domain a function
that takes this s-expression and tries to simplify it. But we run into
an issue. What would be the return type of such a simplify function?
For instance, if we had a complex expression that simplified to an
integer it would be useful to also "simplify the type" by contracting
the return value to Integer. This is generally done by magic in the
interpreter but we're trying to codify simplification.
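The naive approach above can be sketched in a few lines. This is a hedged illustration, not Axiom's actual mechanism: each "domain" expands its value into a nested-tuple s-expression, and the simplify step may contract the type, e.g. a Complex value with zero imaginary part contracts to Integer.

```python
# Illustrative sketch only: nested tuples stand in for lisp s-expressions,
# and the type tags ("Complex", "Integer", "Float") are hypothetical names.

def to_sexpr(value):
    """Recursively expand a value into an s-expression (nested tuples)."""
    if isinstance(value, complex):
        return ("Complex", to_sexpr(value.real), to_sexpr(value.imag))
    return ("Integer", int(value)) if float(value).is_integer() else ("Float", value)

def simplify(sexpr):
    """Simplify an s-expression, contracting the type where possible."""
    if sexpr[0] == "Complex":
        re, im = simplify(sexpr[1]), simplify(sexpr[2])
        # A complex value with zero imaginary part contracts to its real part,
        # so the return type "shrinks" from Complex to Integer.
        if im == ("Integer", 0):
            return re
        return ("Complex", re, im)
    return sexpr

print(simplify(to_sexpr(complex(3, 0))))   # ('Integer', 3)
```

The question raised above is visible even here: `simplify` has no single natural return type, since the result may live in a smaller domain than the input.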

This raises a couple of general observations.

First, simplification is tightly bound to the types. Hand manipulation
of equations shows this. For example, if we take the equation:

  x  = 1

where x is a SYMBOL and change it to 

  x - 1 = 0

we find that we have to "lift" both sides of the equation to a type
that has subtraction, which SYMBOL does not. So each side becomes
a monomial from POLYNOMIAL, and then we can find the subtraction operation.
Thus a "calculus of the types" is required for simplification.
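The lifting step can be sketched as follows. This is a minimal illustration in Python, not Axiom's SYMBOL or POLYNOMIAL; all names here are hypothetical. The point is that `Symbol` deliberately has no subtraction, so both sides of `x = 1` must first be embedded into a polynomial type where subtraction exists.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Symbol:
    name: str
    # Deliberately no __sub__: like Axiom's SYMBOL, this type has no subtraction.

@dataclass
class Polynomial:
    coeffs: dict  # maps a monomial key (symbol name, or 1 for constants) -> coefficient

    def __sub__(self, other):
        result = dict(self.coeffs)
        for mono, c in other.coeffs.items():
            result[mono] = result.get(mono, 0) - c
        return Polynomial({m: c for m, c in result.items() if c != 0})

def lift(value):
    """Embed a Symbol or integer into Polynomial, where subtraction exists."""
    if isinstance(value, Symbol):
        return Polynomial({value.name: 1})
    return Polynomial({1: value})

# x = 1 can only become x - 1 = 0 after both sides are lifted:
lhs, rhs = lift(Symbol("x")), lift(1)
print((lhs - rhs).coeffs)   # {'x': 1, 1: -1}
```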

Second, simplification is tightly bound to provisos. If we take an equation

  x = 1

and divide thru by x we need to decorate the result with the proviso
"provided x != 0" thus:

   x     1
( --- = --- , provided x != 0 )
   x     x

We can add a second level of provisos if we divide thru by y, thus:


    x     1
(( --- = --- , provided x != 0) , provided y != 0 )
   x y   x y

Since x is independent of y these can be combined "at the same level" as:


   x     1
( --- = --- , provided x != 0, y != 0 )
  x y   x y

Clearly there is a "logical calculus of the provisos". In particular, if we
have interval arithmetic it is possible for provisos to vanish. For instance,
if we have an additional proviso that "x in [1,10]" then the "x != 0"
proviso can vanish.
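Both moves in this calculus — flattening independent provisos to one level, and discharging a proviso from interval information — can be sketched briefly. This is an illustrative model only; the representation of provisos as `(var, op, val)` triples is an assumption, not anything from Axiom.

```python
# Hypothetical "logical calculus of the provisos": a result carries a set of
# provisos; independent nested ones flatten into a single set, and interval
# information can discharge a proviso entirely.

def combine(provisos_a, provisos_b):
    """Independent provisos at nested levels flatten into one set."""
    return set(provisos_a) | set(provisos_b)

def discharge(provisos, intervals):
    """Drop provisos of the form (var, '!=', 0) when the variable's
    interval provably excludes 0, e.g. x in [1, 10]."""
    kept = set()
    for var, op, val in provisos:
        lo, hi = intervals.get(var, (float("-inf"), float("inf")))
        if op == "!=" and val == 0 and (lo > 0 or hi < 0):
            continue  # the interval guarantees the proviso; it vanishes
        kept.add((var, op, val))
    return kept

ps = combine({("x", "!=", 0)}, {("y", "!=", 0)})
print(discharge(ps, {"x": (1, 10)}))   # {('y', '!=', 0)}
```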

Provisos also allow for piecewise-defined functions:

( y = 1 provided x < 0, y = 0 provided x >= 0 )

Provisos also allow for multiple answers from a single simplification
(see Abramowitz and Stegun).

Also, Bill Sit points out another simplification issue. We need a
"canonical element" of a domain (e.g. "a" where a is an integer) so
we can say:

 a - 2 

and still do the arithmetic within the domain Integer.
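One way to picture a canonical element is a symbol tagged as belonging to the Integer domain, so that `a - 2` remains an Integer-domain expression and Integer arithmetic still applies to it. The sketch below is purely illustrative; the class name and representation are my own invention, not Bill Sit's or Axiom's.

```python
class CanonicalInt:
    """A symbolic element of the Integer domain: a name plus an integer offset,
    so arithmetic like (a - 2) + 2 == a happens within Integer."""
    def __init__(self, name, offset=0):
        self.name, self.offset = name, offset
    def __sub__(self, n):
        return CanonicalInt(self.name, self.offset - n)
    def __add__(self, n):
        return CanonicalInt(self.name, self.offset + n)
    def __repr__(self):
        return self.name if self.offset == 0 else f"{self.name}{self.offset:+d}"

a = CanonicalInt("a")
print(a - 2)          # a-2
print((a - 2) + 2)    # a
```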

The recent reference I mentioned for simplification is
Cohen, Joel, "Computer Algebra and Symbolic Computation: Mathematical Methods",
A K Peters, Ltd., 2003, ISBN 1-56881-159-4.

This is just a simple summary and probably too abbreviated to make sense.
But it does give you an idea that simplification is very hard.

Tim
address@hidden




