
Re: [Axiom-developer] Call for help

From: Raymond Rogers
Subject: Re: [Axiom-developer] Call for help
Date: Sat, 25 Jul 2015 19:28:38 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.8.0

I looked at intel.pdf, and it brought to mind a cross-compiler I wrote way back when: convincing a PDP-8 compiler to generate object code for something like a 6809. My memory is vague, but I am convinced I was reading object code and outputting different object code, or some such; nevertheless there are resemblances. Aside from the semantics, you seem to be trying to convert one type of code into another (of a mathematical syntax), or to "decorate" existing code for something else to agree with. Or have I misread?
Questions and comments inline:
On 07/25/2015 05:40 PM, address@hidden wrote:
It depends on what horizon we're discussing. There are at least 3
"in plan" at the moment, although like everything that isn't in
running code, this is subject to change.


The near-term goal is to get the "first turn of the crank".
This first step is happening now.

Lisp Level Code

At the lisp level I've begun marking the code with two pieces of
information, a haskell-like type signature and a "level" indicator.
Lisp-level data structures will initially be used. These will get
closer to the Spad-level types (e.g. constructors) as the code gets
closer to Spad, which from the lisp viewpoint is nothing more than a
domain-specific language (DSL).
So when I feed SPAD a statement, it will have been previously verified that the statement will be "correctly" interpreted and executed. Here "previously verified" means that a third party has proven that syntactically correct statements produce results that match some other description.
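To make the marking scheme concrete, here is a hypothetical sketch of what it might look like at the Lisp level. The SIG and LEVEL property names are my own invention; the message does not show the actual convention.

```lisp
;; Hypothetical marking convention: each function carries a
;; Haskell-like signature string and a "level" integer on its
;; property list. The tags SIG and LEVEL are illustrative guesses.
(defun gcd-nat (a b)
  "Greatest common divisor of two naturals."
  (if (zerop b) a (gcd-nat b (mod a b))))

;; signature: gcd-nat :: Nat -> Nat -> Nat
(setf (get 'gcd-nat 'sig) "Nat -> Nat -> Nat")
;; level 0: the body uses only pure Common Lisp
(setf (get 'gcd-nat 'level) 0)
```

The point is that the annotations live alongside ordinary, executable Common Lisp; the proof tools read them without disturbing the running code.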

The "level" indicator marks code with its "height" in the lisp
hierarchy. Code which is pure common lisp is marked at level 0.
Code implemented using only common lisp and level 0 code is level 1,
etc. This allows us to crawl up the code from ground-level.
I really have doubts about relying on a single "level" indicator. My experience is that more complex structures are necessary for effective operation. Unless it's just used as a project management tool.
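The "crawl up from ground level" idea can be sketched as a small rule: a function built only from level-n code sits at level n+1. The LEVEL property and the hand-supplied callee lists below are my own illustrative representation, not the project's.

```lisp
;; Sketch: pure Common Lisp is level 0; code built only from
;; Common Lisp and level-0 code is level 1, and so on.
;; Levels of hypothetical existing functions, recorded by hand:
(setf (get 'word-add 'level) 0)   ; pure Common Lisp
(setf (get 'poly-add 'level) 1)   ; built from level-0 code

(defun code-level (callees)
  "Level of a function whose non-primitive callees are CALLEES."
  (if (null callees)
      0                            ; pure Common Lisp
      (1+ (reduce #'max callees
                  :key (lambda (f) (or (get f 'level) 0))))))

;; (code-level '(word-add poly-add))  ; => 2
```

Proving then proceeds bottom-up: level-0 code is proved against Common Lisp alone, level-1 code may assume the level-0 theorems, and so on.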

Lisp-level code is being proven using ACL2 and side-level proofs.
That is, the "proof" is a version of the code that fits within the
ACL2 constraints (e.g. is not using floats). Some ACL2 code is
directly executable Common Lisp and will be used verbatim.
I have to look ACL2 up and find out its relationship to Lisp.

Currently this proven code is automatically extracted at build time
and given to ACL2 to prove.
So you're saying that with a few additions no translation is needed? That's good.
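A sketch of what such a side-proof might look like (a toy example, not from Axiom's sources). ACL2's logic is an applicative subset of Common Lisp, so the definition itself is directly executable; the theorem is what the build-time extraction would hand to ACL2. ACL2 may need one of its arithmetic books loaded to discharge it.

```lisp
;; ACL2 "side-proof" sketch: an executable definition plus a
;; theorem about it, checked by ACL2 rather than at run time.
(defun sum-to (n)
  (if (zp n)                    ; zp: ACL2's zero test for naturals
      0
      (+ n (sum-to (- n 1)))))

;; the side-proof: a closed form for the sum 1 + 2 + ... + n
(defthm sum-to-closed-form
  (implies (natp n)
           (equal (* 2 (sum-to n)) (* n (+ n 1)))))
```

Because the defun fits within the ACL2 constraints (naturals only, no floats), the same text serves as both the proof artifact and running Common Lisp.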

Spad Level Code

At the Spad level, it appears that COQ is the closest in spirit.
COQ allows dependent types (e.g. arrays with fixed length) and
executable primitives so properties can be defined by code.

Here the game is to decorate the categories with mathematical
axioms. These axioms will be (re-)stated in COQ so they can be
used in proofs. The axioms are spread across the categories and
inherited using the same hierarchy as the category structure.
Thus Group inherits from Ring, etc.
Re-stated? Is this to be contained in the literate programming? What I am asking is: why re-state? Is the input to be provided in COQ format, or is some other program acting as an intermediary between COQ and SPAD?
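One plausible shape for the re-statement in COQ (a sketch; the actual Axiom-to-COQ encoding is not specified in the message): a Spad category becomes a record that bundles its operations with its axioms, and category inheritance becomes record extension. The dependent-type point is illustrated by Vector.t, where the length is part of the type.

```coq
(* Dependent types: the length is part of the type. *)
Require Import Coq.Vectors.Vector.
Check (Vector.t nat 3).   (* vectors of exactly three naturals *)

(* Sketch: a category as a record carrying its axioms. *)
Record SemiGroup := {
  carrier : Type;
  op      : carrier -> carrier -> carrier;
  assoc   : forall a b c, op a (op b c) = op (op a b) c
}.

(* Monoid "inherits" SemiGroup's operations and axioms,
   and adds its own. *)
Record Monoid := {
  sg    : SemiGroup;
  unit  : carrier sg;
  unitl : forall a : carrier sg, op sg unit a = a;
  unitr : forall a : carrier sg, op sg a unit = a
}.
```

Under this encoding, anything proved about a SemiGroup is available for every Monoid through the sg field, mirroring the category hierarchy.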

Some categories provide default code. This will be the first
level of attack using COQ, trying to prove that the code conforms
to and correctly implements the axioms.
Okay, now we are on an internal code line and have stacked up a data structure with "history" (i.e. this term represents at least a group), and we are asking some code A (SPAD or Axiom) to do something to it; say, provide an output. The fact that this code has been previously checked by COQ will verify that code A will perform the modification as described in some third format; presumably COQ.
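A minimal, self-contained example of "proving default code conforms to its axiom" in COQ (my own toy, not Axiom's actual default code): the definition is executable, and the lemma is the stated property.

```coq
Require Import Lia.

(* "Default code": an executable definition a category might supply. *)
Definition double (n : nat) : nat := n + n.

(* The axiom the category states about it. *)
Lemma double_spec : forall n : nat, double n = 2 * n.
Proof. intro n; unfold double; lia. Qed.
```

The first level of attack would be exactly this pattern, repeated for each piece of default code in the category hierarchy.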

Machinery needs to be created to put these axioms into the
system in an effective way. For instance, )show should be able
to aggregate and display the inherited collection. The proof
extraction machine needs to be able to collect and assert the
axioms prior to proof.
Yes, a hierarchy with clean goals would be nice. I have dug through some of the background code in Axiom, and some readability (literate programming) would have been nice. But of course I have a simple mind and limited knowledge.
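The aggregation )show would need could be as simple as a walk up the category hierarchy collecting axioms at each level. The AXIOMS and PARENTS properties below are an assumed representation, not Axiom's internal one.

```lisp
;; Sketch: categories record their own axioms and their parents;
;; )show would display the transitive collection.
(setf (get 'semigroup 'axioms)  '(associativity))
(setf (get 'monoid    'parents) '(semigroup))
(setf (get 'monoid    'axioms)  '(left-unit right-unit))

(defun inherited-axioms (category)
  "All axioms stated on CATEGORY or inherited from its parents."
  (append (get category 'axioms)
          (mapcan #'inherited-axioms (get category 'parents))))

;; (inherited-axioms 'monoid)
;; => (LEFT-UNIT RIGHT-UNIT ASSOCIATIVITY)
```

The proof-extraction machine would use the same traversal, asserting the collected axioms as hypotheses before attempting each proof.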

Currently this proven code is automatically extracted at build time
and given to COQ to prove. COQ contains support for things like
integer arithmetic so this will help with the circular reference
I realize I am being lazy but a pointer to an example would be appreciated.

"Other level" code

Axiom compiles to C using GCL. There is work based on LLVM that
supplies proof technology. C compiles to i86 machine code. I have
written a program which takes i86 binary code and produces
conditional-concurrent assignments (CCA) which define the machine
level semantics of each Intel instruction. The CCAs can be
combined to produce the semantics of blocks of code.

So there are the beginnings of "full stack" proof technologies
from Spad to machine code. In the near term we only care about
the Spad and Lisp code.
Now this has several levels, all of which seem to be related to cross-compiling. I would feel much more secure if all of the cross-compiling were handled by one agent, which can itself be verified.
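For illustration only (this is my guess at the shape of such a description, not the actual program's format): a conditional-concurrent assignment for one instruction could be a list of (guard, target, expression) triples, with every right-hand side read from the pre-state and all enabled assignments firing at once.

```lisp
;; Sketch of a CCA for an 8-bit "INC r" instruction. Guards are
;; conditions on the pre-state; here all are trivially true.
;; Only some of the affected flags are shown; the representation
;; is illustrative, not the actual tool's format.
(defparameter *inc-r-cca*
  '((t r  (mod (+ r 1) 256))                   ; r := r + 1, wrapping
    (t zf (if (= (mod (+ r 1) 256) 0) 1 0))    ; zero flag
    (t of (if (= r 127) 1 0))))                ; signed-overflow flag
```

Composing the CCAs of consecutive instructions, by substituting each pre-state expression into the next, would yield the semantics of a whole block, as the message describes.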


This will be a learning experience. It is expected that the effort
will uncover bugs, misconceptions, and unprovable hypotheses.


The near term proofs are "side-proofs" that sit alongside existing
code. However, we have complete control over the Spad-to-Lisp compiler
so it is possible to generate the proof code from the Spad code. We
also have complete control over the Lisp-to-C compiler so it is
possible to generate the proof code from the Lisp code. Even now some
ACL2 Lisp code is directly executable and I expect this trend to continue.

This will certainly cause existing code to be reshaped, rewritten, and
reorganized. It will also make code even harder to write. But the
benefit is much higher confidence in the answers provided.

Since this will take a long time we can hope that the tools we use
will be much better. Quite a few things have changed since I used
NQTHM back in the distant past. I expect they will continue to change
for the better.
I can't remember or find NQTHM; I presume it's related to the history of program verification (Sandia?).


Computational mathematics is more constrained than general mathematics.
Proofs need to be constructive and need to account for more details.
That said, it is still mathematics. We can and should demand that
Axiom is "correct" for some provable version of "correct".
Oh yes: we would like to be "Standing on the shoulders of giants" not being trampled underneath.

Already there are proofs of some algorithms. I have a proof of
Buchberger's algorithm. Ideally this proven code and its associated
proof can be directly inserted into Axiom.

This is a lot of work. Fortunately Axiom has a 30 year horizon so
there is plenty of time.


One further comment along the literate programming lines. Unless the organization and references are transparent, the project will be a Tower of Babel, with the same consequences. Although I think the world is complex, I do know that for human projects organization is key to continuity.
Euclid is my hero in math :):)  Someone I appreciate more and more.

 Two views on life:
life is an art not to be learned by observation.
George Santayana: Interpretations of Poetry and Religion
It's kinda nice to participate in your life
Raymond Rogers
