
Re: [Axiom-developer] Aldor and Lisp

From: C Y
Subject: Re: [Axiom-developer] Aldor and Lisp
Date: Sat, 15 Oct 2005 11:56:41 -0700 (PDT)

--- Martin Rubey <address@hidden> wrote:
> Dear Cliff,
> > [...] but currently I'm still trying to figure out dimensionless 
> > dimensions and other fun.  I guess I'm kinda hoping that by the 
> > time I finally get this well enough to try any actual coding the 
> > decision as to which path to take will have been made and progress
> > begun.

> You can safely code in either Spad or Aldor. There is even an 
> automated translator from Spad to Aldor, so this route is 
> completely "safe". If you decide to code in Aldor, backporting to 
> Spad is mostly trivial --if you don't use dependent types, that is.

Heh - don't know what we will need yet for sure - I'd be a bit
surprised if it comes to that.  I'm still a ways away from coding -
along with straightening out my own ideas, I've got to review the
papers and whatnot associated with a number of other systems still. 
It's really too bad I won't be able to pass this off as a thesis
somewhere :-/.

> PLEASE stick to your project! It is important to us, as you can see 
> from that commercial guy.

I think I'm now in "stubborn" mode, which is a good sign - also I'm
resolving one of the most basic things that bothered me about units and
dimensions in undergrad physics.  I always had the sense that behind
the seeming simplicity of units and dimensions lurked a monster - and
until I came to grips with it I would never really understand units or
dimensions.  So I guess in some sense this is my bid for true
understanding.
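
For what it's worth, the bookkeeping half of the problem is simple to
state: a dimension is just a vector of exponents over the base
quantities.  Here is a minimal Python sketch (a toy model of my own, not
Axiom code) that also shows exactly where the "dimensionless" subtlety
lives:

```python
from fractions import Fraction

# A dimension as a vector of exponents over the base quantities
# (only length, mass, time here, for brevity).
BASE = ("L", "M", "T")

def dim(**exps):
    """Build a dimension, e.g. dim(L=1, T=-1) for velocity."""
    return tuple(Fraction(exps.get(b, 0)) for b in BASE)

def mul(d1, d2):
    return tuple(a + b for a, b in zip(d1, d2))

def div(d1, d2):
    return tuple(a - b for a, b in zip(d1, d2))

def is_dimensionless(d):
    return all(e == 0 for e in d)

velocity = dim(L=1, T=-1)
time = dim(T=1)
length = dim(L=1)

# velocity * time / length is dimensionless -- but note the model cannot
# distinguish a pure number from, say, an angle in radians: both collapse
# to the zero vector.  That collapse is part of what makes "dimensionless
# dimensions" subtler than they look.
ratio = div(mul(velocity, time), length)
assert is_dimensionless(ratio)
```

The exponent arithmetic is the easy part; deciding what the zero vector
should *mean* is the monster I was referring to.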

> > A mathematical proof failing for a "minor" reason as opposed to a 
> > "major" one still fails,

> As a mathematician: a "minor" mistake is one that can be fixed (by 
> myself), a "major" one is one that I don't know how to fix...

Ah :-).  True, but until even an "easy" mistake is corrected the proof
is still wrong...

[big snip]

> > mathematicians are very reluctant to accept the idea of computers 
> > as useful or reliable tools in matters such as proofs, and given 
> > the behavior of modern software it's a bit hard to blame them.

> Don't worry. Most mathematicians I know accept computer proofs 
> these days. The point rather is that we would like to *understand* 
> the proofs. And a paper and pencil proof is usually easier to 
> understand than two thousand lines of computer generated 
> calculations.

Hmm.  I'm betting human-understandable proofs are probably possible
only for a small subset of what is provable.  So there are really two
distinct goals here - verifying that something is true, and
understanding why it is true.  I guess to me the first is of more
interest, but I see the merit of the second for creating new
mathematics.
> I had an experience I'd like to share: In the course of proving 
> something, I needed to prove that a certain polynomial inequality 
> holds. I only knew that my polynomial would have positive 
> coefficients, to put you in the picture. I failed. However, I 
> was able to prove it for polynomials of degree 3.


> For polynomials of degree 4, Mathematica's InequalitySolve proved
> it. Unfortunately, I don't know how, since it only answered "True".
> Of course, it did use a CAD, but it was impossible for me to see 
> what it really did.

I am not sure I would agree right off that it proved it, particularly
since you couldn't generate a list of arguments on which that proof was
founded.  All proofs have to trust SOMETHING, even if only the human
brain, and it should be clear what we are trusting.  In this case it
involves trusting that Mathematica's internal algorithms and design are
fully "correct."
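
To make the "what are we trusting" point concrete: short of a real
proof, the cheapest independent check is to evaluate the polynomial
exactly at many rational points.  A Python sketch, with a made-up
degree-4 polynomial standing in for yours (all I know is that yours had
positive coefficients):

```python
from fractions import Fraction
from random import randint

# Hypothetical stand-in for the inequality in question: a degree-4
# polynomial with positive coefficients, checked to be > 0 for x > 0
# at many exact rational sample points.  This is evidence, not a
# proof -- exactly the gap between "the system says True" and a
# checkable argument.
coeffs = [Fraction(c) for c in (3, 1, 4, 1, 5)]  # 3x^4 + x^3 + 4x^2 + x + 5

def p(x):
    """Evaluate the polynomial by Horner's rule, exactly."""
    acc = Fraction(0)
    for c in coeffs:
        acc = acc * x + c
    return acc

for _ in range(1000):
    x = Fraction(randint(1, 10**6), randint(1, 10**6))
    assert p(x) > 0
```

Sampling never replaces the CAD argument, but it is an argument a human
can inspect end to end, which is the property "True" lacks.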

> I hope that we can modify Renaud's package

I agree - transparency should be a design goal.

> > a) Axiom and similar pieces of software are capable of 
> > constructing proof arguments for any and all mathematical outputs.

> Again, I'm pretty sure that trying to prove the correctness of 
> Axiom itself is the wrong path. If you are *really* interested and
> have the talent, go ahead. But if you think that anybody would use 
> Axiom because it was proven correct, I'm sure that you are mistaken.

I suppose, but I view it as a way of increasing confidence in the
software.  If you want to think of it one way, the whole notion of
doing symbolic mathematics via computer involves bridging the gap
between fundamentally continuous processes (e.g. integration) and
fundamentally numerical, digital ones (the machine level workings of a
digital computer).  I think that process can stand some verification -
it's a hard job, and each part's correctness is theoretically limited by
the correctness of what it relies on.

I'm hoping the day will come when, to be taken seriously, a program HAS
to present as much verification as can be performed.
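
One modest form such verification could take: check a claimed
antiderivative against its integrand by numerical differentiation.  A
Python sketch with a textbook integrand/antiderivative pair (not actual
Axiom output):

```python
import math

# Cheap independent check of a symbolic integration result:
# differentiate the claimed antiderivative numerically (central
# differences) and compare against the integrand at sample points.
def f(x):
    """Integrand: x * cos(x)."""
    return x * math.cos(x)

def F(x):
    """Claimed antiderivative: x*sin(x) + cos(x)."""
    return x * math.sin(x) + math.cos(x)

def check(f, F, xs, h=1e-6, tol=1e-5):
    """Return True if F'(x) matches f(x) within tol at every point."""
    return all(abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < tol
               for x in xs)

assert check(f, F, [0.1 * k for k in range(1, 30)])
```

It only raises confidence rather than proving anything, but a system
that routinely ran such checks on its own output would already be ahead
of one that just prints the answer.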

> > If I may ask, how come the boot step is needed?  

> As far as I know, it's a historical failure. Some people working on
> Axiom didn't like Lisp's parens and prefix notation...

Arrgh.  Thank you, Fortran and friends.

> > Do we need it in the Aldor -> Lisp translation?  I understand the
> > need to limit the power of Lisp in Aldor
> No, Aldor is just as powerful as Lisp in the Turing sense. 

Hmm. OK.

> From the programmers perspective, it is not as powerful since it 
> doesn't have macros. By the way, Lisp was originally designed as a
> means to prove things about computer languages. It was at first not
> intended to be actually used for programming. So, I'd say that 
> proving things about Aldor programs will be much harder than
> proving things about Lisp programs :-(

Sigh.  Figures.  Well, both will have to be done anyway someday if I
really make a go at verified results, but I have a lot to learn about
proof logic first.

> All the best and keep your Units project alive!

Oh, it lives :-).  I'm just trying to do my homework this time around! 
I'm hoping after this and Error Analysis a lot of really solid
scientific logic can be built on top of these Domains.  Error Analysis
is going to get me into REAL trouble, and probably annoy the heck out
of everybody (for example, the need to establish a way to handle and
quantify the uncertainty associated with any numerical calculation in
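
The sort of uncertainty bookkeeping I mean can at least be sketched: a
value carrying an error bar, propagated by the usual first-order
quadrature rules for independent errors.  A toy Python model, nowhere
near a full Error Analysis domain (which would also need correlations,
rounding modes, and rigorous interval bounds):

```python
import math

# A value with an attached uncertainty, propagated through + and *
# by first-order quadrature rules, assuming independent errors.
class U:
    def __init__(self, value, err):
        self.value, self.err = value, err

    def __add__(self, other):
        # Absolute errors add in quadrature.
        return U(self.value + other.value,
                 math.hypot(self.err, other.err))

    def __mul__(self, other):
        # Relative errors add in quadrature.
        v = self.value * other.value
        rel = math.hypot(self.err / self.value, other.err / other.value)
        return U(v, abs(v) * rel)

    def __repr__(self):
        return f"{self.value} +/- {self.err}"

a = U(10.0, 0.1)
b = U(5.0, 0.2)
print(a + b)   # 15.0 +/- ~0.224
print(a * b)   # 50.0 +/- ~2.06
```

Even this toy version shows why the topic gets contentious fast: the
rules encode statistical assumptions that a numerical library normally
never states.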


