"Ondrej Certik" <address@hidden> writes:
| > The usual example is:
| >
| > (1/2)*x^2 + (1/3)*x + 3
| > Type: Polynomial Fraction Integer
| >
| > %::Fraction(Polynomial(Integer))
| >
| > 2
| > 3x + 2x + 18
| > -------------
| > 6
| > Type: Fraction Polynomial Integer
| >
| >
| > When you read a maths book most of the equations are actually just
| > "icons". The real meaning of the equation is in the surrounding text
| > and context. Axiom carries that surrounding information in the type
| > whereas other systems focus on the "icons" and syntactic manipulations.
|
|
| I have this question:
|
| The above expressions can be assembled just from these 4 types (or
| classes in SymPy): Add, Mul, Pow, Integer
|
| what is the advantage of introducing more classes, like Polynomial
| Fraction Integer or Fraction Polynomial Integer?
I see Add, Mul, Pow, and Integer as members of a low-level Symbolic
and Algebraic Instruction Set (similar to the usual ISA for chips). So
your question, if I understand it correctly, is: what is the advantage
of writing the above in a high-level typed language over a simple
stream of assembly instructions?
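To make the analogy concrete, here is a rough sketch of that
"instruction stream" view in SymPy terms (my own illustration, since
SymPy was mentioned; the exact printed form of the tree may vary
between versions):

   from sympy import symbols, Rational, srepr

   x = symbols('x')
   e = Rational(1, 2)*x**2 + Rational(1, 3)*x + 3

   # The object is just a tree built from the low-level "instructions"
   # (Add, Mul, Pow, Integer, plus Rational for the coefficients); the
   # tree itself says nothing about the domain the whole object is
   # meant to live in.
   print(srepr(e))
   # roughly: Add(Mul(Rational(1, 2), Pow(Symbol('x'), Integer(2))),
   #              Mul(Rational(1, 3), Symbol('x')), Integer(3))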
I suspect a definitive, convincing argument is hard to make on a very
simple example like the one above -- though it is already convincing
to the eyes of the seasoned CAS implementer and practitioner.
However, as is the case for the use of strongly typed programming
languages in software development, I suspect the answer lies in
programming-in-the-large, i.e. developing libraries to attack larger
and harder problems in the computational sciences. My usual
references for the rationale of types in software development are
these two excellent papers by Luca Cardelli:
* On Understanding Types, Data Abstraction, and Polymorphism
http://lucacardelli.name/Papers/OnUnderstanding.A4.pdf
* Typeful Programming
http://www.lucacardelli.name/Papers/TypefulProg.pdf
| Because having the expression in the form of the 4 basic classes (Add,
| Mul, Pow, Integer), everything else can be easily inferred from this.
Tim mentioned the problem of zero-equivalence, which is known to be
undecidable in general. However, there are classes of expressions for
which it is decidable and algorithms are known. Types let you easily
identify and *label* members of those classes, *independently of
their low-level representations* in terms of the Symbolic and Algebraic
Instruction Set. The use of types conveys additional meaning, usually
lost in explanations (or in the surrounding text, as Tim put it), into
object representations. You may be able to infer that meaning through
ad-hoc informal reading, such as "it is obvious that
(1/2)*x^2 + (1/3)*x + 3
and
2
3x + 2x + 18
-------------
6
represent the same entity and only the representation changes."
However, the former attaches to the representation the crucial
information that it is a polynomial over fractions of integers, while
the latter is a fraction of polynomials over integers. That crucial
information is part of the representation of the object, and not
something left to be inferred by an external tool that would rely
on "common sense" or "ad hoc reading". Whether that information is
relevant or not depends on the context of use. And in a large-scale
library or problem, it definitely does matter. For example, it acts
as a control over which operations can be meaningfully applied to the
objects. Again, I refer back to Luca Cardelli's papers on the value
of types in software development.
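To make the point about operations concrete, here is a rough sketch
of the two views in SymPy terms (again my own illustration, not a
claim about how Axiom represents them internally):

   from sympy import symbols, Rational, Poly, together, fraction

   x = symbols('x')
   e = Rational(1, 2)*x**2 + Rational(1, 3)*x + 3

   # Viewed as a polynomial over fractions of integers, asking for a
   # coefficient is a meaningful operation.
   p = Poly(e, x, domain='QQ')
   print(p.coeff_monomial(x**2))          # 1/2

   # Viewed as a fraction of polynomials over integers, numerator and
   # denominator are the meaningful parts.
   num, den = fraction(together(e))
   print(Poly(num, x, domain='ZZ'), den)  # numerator 3*x**2 + 2*x + 18,
                                          # denominator 6

In Axiom the choice between the two views is carried by the type of
the object itself (Polynomial Fraction Integer vs. Fraction Polynomial
Integer); with the bare tree, that choice has to be rediscovered by
whoever consumes the expression.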
-- Gaby