axiom-developer

Re: Axiom musings...


From: Tim Daly
Subject: Re: Axiom musings...
Date: Fri, 7 Feb 2020 00:57:11 -0500

For a mathematician, it is difficult to use a system like Axiom,
mostly because it keeps muttering about Types. If you're not
familiar with type theory (most mathematicians aren't) then it
seems pointless and painful.

So Axiom has a steep learning curve.

For a mathematician with an algorithmic approach, it is difficult
to use a system like Axiom, mostly because you have to find
or create "domains" or "packages", understand categories
with their inheritance model, and learn a new language with
a painful compiler always complaining about types.

So Axiom has a steep learning curve.

The Sane version of Axiom requires knowing the mathematics.
It also assumes a background in type theory, inductive logic,
homotopy type theory, ML (meta-language, not machine
learning (yet)), interactive theorem proving, kernels, tactics,
and tacticals. Also assumed is knowledge of specification languages,
Hoare triples, proof techniques, soundness, and completeness.
Oh, and there is a whole new syntax and semantics added to
specify definitions, axioms, and theorems, not to mention whole
libraries of the same.

So Axiom Sane has a steep learning curve.

I've taken 10 courses at CMU and spent the last 4-5 years
learning to read the leading edge literature (also known
as "greek studies", since every paper has pages of greek).

I'm trying to unify computer algebra and proof theory into a
"computational mathematics" framework. I suspect that the only
way this system will ever be useful is after Universities have a
"Computational Mathematics" major course of study and degree.

Creating a new department is harder than creating Axiom Sane
because, you know, ... people.

I think such a department is inevitable given the deep and wide
impact of computers, just not in my lifetime. That's ok. When I
started programming there was no computer science degree.

Somebody has to be the first lemming over the cliff.

Tim

On 1/9/20, Tim Daly <address@hidden> wrote:
> When Axiom Sane is paired with a proof checker (e.g. Lean),
> a certain amount of verification is involved.
>
> Axiom will provide proofs (presumably validated by Lean) for its
> algorithms. Ideally, when a computation is requested from Lean
> for a GCD, the result as well as a proof of the GCD algorithm is
> returned. Lean can then verify that the proof is valid. But it is
> computationally more efficient if Axiom and Lean use a cryptographic
> hash, such as SHA1. That way the proof doesn't need to be
> 'reproven', only a hash computation over the proof text needs to
> be performed. Hashes are blazingly fast. This allows proofs to be
> exchanged without re-running the proof mechanism. Since a large
> computation request from Lean might involve many algorithms
> there would be considerable overhead to recompute each proof.
> A hash simplifies the issue yet provides proof integrity.
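>
> As a rough sketch of that check (the Ironclad library and the function
> names here are only an illustration, not a settled Axiom/Lean protocol),
> the receiver just recomputes the hash and compares:
>
>   ;; Load the Ironclad cryptography library (an assumed dependency).
>   (ql:quickload :ironclad)
>
>   (defun proof-hash (proof-text)
>     "SHA1 hash, as a hex string, over the text of a proof."
>     (ironclad:byte-array-to-hex-string
>      (ironclad:digest-sequence :sha1
>        (ironclad:ascii-string-to-byte-array proof-text))))
>
>   (defun proof-unchanged-p (proof-text expected-hash)
>     "Accept a previously proven result without re-running the prover."
>     (string= (proof-hash proof-text) expected-hash))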
>
> Tim
>
>
> On 1/9/20, Tim Daly <address@hidden> wrote:
>> Provisos... that is, 'formula SUCH THAT pre/post-conditions'
>>
>> A computer algebra system ought to know and ought to provide
>> information about the domain and range of a resulting formula.
>> I've been pushing this effort since the 1980s (hence the
>> SuchThat domain).
>>
>> It turns out that computing with, carrying, and combining this
>> information is difficult if not impossible in the current system.
>> The information isn't available and isn't computed. In that sense,
>> the original Axiom system is 'showing its age'.
>>
>> In the Sane implementation the information is available. It is
>> part of the specification and part of the proof steps. With a
>> careful design it will be possible to provide provisos for each
>> given result that are carried with the result for use in further
>> computation.
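>>
>> As a minimal sketch of carrying provisos (an illustration in Lisp, not
>> Axiom's SuchThat code): pair each formula with the conditions under
>> which it holds, and let combining operations accumulate them.
>>
>>   ;; A result carries its formula plus the provisos under which it holds.
>>   (defstruct proviso-result
>>     formula       ; the expression returned to the user
>>     conditions)   ; list of pre/post-conditions attached to it
>>
>>   (defun combine (op a b)
>>     "Combine two results; the provisos travel along with the formula."
>>     (make-proviso-result
>>      :formula (list op (proviso-result-formula a)
>>                        (proviso-result-formula b))
>>      :conditions (append (proviso-result-conditions a)
>>                          (proviso-result-conditions b))))
>>
>>   ;; Example: sqrt(x) is real only when x >= 0, so that proviso is
>>   ;; carried into any later computation that uses the result.
>>   ;; (combine '+
>>   ;;   (make-proviso-result :formula '(sqrt x) :conditions '((>= x 0)))
>>   ;;   (make-proviso-result :formula '(log y)  :conditions '((> y 0))))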
>>
>> This raises interesting questions to be explored. For example,
>> if the formula is defined over an interval, how is the interval
>> arithmetic handled?
>>
>> Exciting research ahead!
>>
>> Tim
>>
>>
>>
>> On 1/3/20, Tim Daly <address@hidden> wrote:
>>> Trusted Kernel... all the way to the metal.
>>>
>>> While building a trusted computer algebra system, the
>>> SANE version of Axiom, I've been looking at questions of
>>> trust at all levels.
>>>
>>> One of the key tenets (the de Bruijn principle) calls for a
>>> trusted kernel through which all other computations must
>>> pass. Coq, Lean, and other systems do this. They base
>>> their kernel on a logic like the Calculus of Constructions or
>>> something similar.
>>>
>>> Andrej Bauer has been working on a smaller kernel (a
>>> nucleus) that separates the trust from the logic. The rules
>>> for the logic can be specified as needed but checked by
>>> the nucleus code.
>>>
>>> I've been studying Field Programmable Gate Arrays (FPGA)
>>> that allow you to create your own hardware in a C-like
>>> language (Verilog). It allows you to check the chip you build
>>> all the way down to the transistor states. You can create
>>> things as complex as a whole CPU or as simple as a trusted
>>> nucleus. (youtube: Building a CPU on an FPGA). ACL2 has a
>>> history of verifying hardware logic.
>>>
>>> It appears that, assuming I can understand Bauer's
>>> Andromeda system, it would be possible and not that hard
>>> to implement a trusted kernel on an FPGA the size and
>>> form factor of a USB stick.
>>>
>>> Trust "down to the metal".
>>>
>>> Tim
>>>
>>>
>>>
>>> On 12/15/19, Tim Daly <address@hidden> wrote:
>>>> Progress is happening on the new Sane Axiom compiler.
>>>>
>>>> Recently I've been musing about methods to insert axioms
>>>> into categories so they could be inherited like signatures.
>>>> At the moment I've been thinking about adding axioms in
>>>> the same way that signatures are written, adding them to
>>>> the appropriate categories.
>>>>
>>>> But this is an interesting design question.
>>>>
>>>> Axiom already has a mechanism for inheriting signatures
>>>> from categories. That is, we can get a plus signature from,
>>>> say, the Integer category.
>>>>
>>>> Suppose we follow the same pattern. Currently Axiom
>>>> inherits certain so-called "attributes", such as ApproximateAttribute,
>>>> which implies that the results are only approximate.
>>>>
>>>> We could adapt the same mechanism to inherit the Transitive
>>>> property by defining it in its own category. In fact, if we
>>>> follow Carette and Farmer's "tiny theories" architecture,
>>>> where each property has its own inheritable category,
>>>> we can "mix and match" the axioms at will.
>>>>
>>>> An "axiom" category would also export a function. This function
>>>> would essentially be a "tactic" used in a proof. It would modify
>>>> the proof step by applying the function to the step.
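>>>>
>>>> As a rough CLOS sketch of the idea (class and function names are
>>>> illustrative only, not Axiom's): each property is its own tiny,
>>>> inheritable unit that exports a function usable as a tactic.
>>>>
>>>>   ;; One tiny "theory" per property; domains mix in the ones they satisfy.
>>>>   (defclass transitive-property () ()
>>>>     (:documentation "Mixin carrying the transitivity axiom."))
>>>>
>>>>   (defgeneric apply-axiom (property step)
>>>>     (:documentation "Use the property's axiom as a tactic on a proof step."))
>>>>
>>>>   (defmethod apply-axiom ((p transitive-property) step)
>>>>     ;; e.g. rewrite (rel a b) and (rel b c) in STEP into (rel a c)
>>>>     (list :by-transitivity step))
>>>>
>>>>   ;; "Mix and match": a domain inherits exactly the axioms it needs.
>>>>   (defclass ordered-integer-domain (transitive-property) ())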
>>>>
>>>> Theorems would have the same structure.
>>>>
>>>> This allows theorems to be constructed at run time (since
>>>> Axiom supports "First Class Dynamic Types").
>>>>
>>>> In addition, this design can be "pushed down" into the Spad
>>>> language so that Spad statements (e.g. assignment) have
>>>> proof-related properties. A range such as [1..10] would
>>>> provide explicit bounds in a proof "by language definition".
>>>> Defining the logical properties of language statements in
>>>> this way would make it easier to construct proofs since the
>>>> invariants would be partially constructed already.
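>>>>
>>>> A small illustration (the helper name is hypothetical, not Spad or
>>>> Sane syntax): an assignment from a range literal already yields the
>>>> bound that a proof can use.
>>>>
>>>>   ;; The invariant implied by  i := [low..high]  "by language definition".
>>>>   (defun range-invariant (var low high)
>>>>     `(and (<= ,low ,var) (<= ,var ,high)))
>>>>
>>>>   ;; (range-invariant 'i 1 10)  =>  (AND (<= 1 I) (<= I 10))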
>>>>
>>>> This design merges the computer algebra inheritance
>>>> structure with the proof of algorithms structure, all under
>>>> the same mechanism.
>>>>
>>>> Tim
>>>>
>>>> On 12/11/19, Tim Daly <address@hidden> wrote:
>>>>> I've been reading Stephen Kell's (Univ. of Kent,
>>>>> https://www.cs.kent.ac.uk/people/staff/srk21/) essay
>>>>> “Seven deadly sins of talking about ‘types’”
>>>>> https://www.cs.kent.ac.uk/people/staff/srk21//blog/2014/10/07/
>>>>>
>>>>> He raised an interesting idea toward the end of the essay
>>>>> that type-checking could be done outside the compiler.
>>>>>
>>>>> I can see a way to do this in Axiom's Sane compiler.
>>>>> It would be possible to run a program over the source code
>>>>> to collect the information and write a stand-alone type
>>>>> checker. This "unbundles" type checking and compiling.
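>>>>>
>>>>> A toy, self-contained illustration (not the Sane checker itself) of
>>>>> such an unbundled pass: walk already-parsed forms, here plain
>>>>> s-expressions, and report variables that were never declared.
>>>>>
>>>>>   (defun undeclared-variables (form declared)
>>>>>     "Return variables in FORM that are not in the list DECLARED."
>>>>>     (cond ((and (symbolp form) (not (null form)))
>>>>>            (if (member form declared) '() (list form)))
>>>>>           ((consp form)
>>>>>            (remove-duplicates
>>>>>             (mapcan (lambda (f) (undeclared-variables f declared))
>>>>>                     (rest form))))   ; skip the operator position
>>>>>           (t '())))
>>>>>
>>>>>   ;; (undeclared-variables '(+ x (* y z)) '(x y))  =>  (Z)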
>>>>>
>>>>> Taken further I can think of several other kinds of checkers
>>>>> (aka 'linters') that could be unbundled.
>>>>>
>>>>> It is certainly something to explore.
>>>>>
>>>>> Tim
>>>>>
>>>>>
>>>>> On 12/8/19, Tim Daly <address@hidden> wrote:
>>>>>> The Axiom Sane compiler is being "shaped by the hammer
>>>>>> blows of reality", to coin a phrase.
>>>>>>
>>>>>> There are many goals. One of the primary goals is creating a
>>>>>> compiler that can be understood, maintained, and modified.
>>>>>>
>>>>>> So the latest changes involved adding multiple index files.
>>>>>> These are documentation (links to where terms are mentioned
>>>>>> in the text), code (links to the implementation of things),
>>>>>> error (links to where errors are defined), signatures (links to
>>>>>> the signatures of lisp functions), figures (links to figures),
>>>>>> and separate category, domain, and package indexes.
>>>>>>
>>>>>> The tikz package is now used to create "railroad diagrams"
>>>>>> of syntax (a la the PASCAL report). The implementation of
>>>>>> those diagrams follows immediately. Collectively these will
>>>>>> eventually define at least the syntax of the language. In the
>>>>>> ideal, changing the diagram would change the code but I'm
>>>>>> not that clever.
>>>>>>
>>>>>> Reality shows up with the current constraint that the
>>>>>> compiler should accept the current Spad language as
>>>>>> closely as possible. Of course, plans are to include new
>>>>>> constructs (e.g. hypothesis, axiom, specification, etc)
>>>>>> but these are being postponed until "syntax complete".
>>>>>>
>>>>>> All parse information is stored in a parse object, which
>>>>>> is a CLOS object (and therefore a Common Lisp type).
>>>>>> Fields within the parse object, e.g. variables, are also
>>>>>> CLOS objects (and therefore a Common Lisp type).
>>>>>> It's types all the way down.
>>>>>>
>>>>>> These types are being used as 'signatures' for the
>>>>>> lisp functions. The goal is to be able to type-check the
>>>>>> compiler implementation as well as the Sane language.
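>>>>>>
>>>>>> A small sketch of what that looks like (class names are illustrative,
>>>>>> not the compiler's actual classes): the parse classes double as the
>>>>>> type declarations on the lisp functions that process them.
>>>>>>
>>>>>>   (defclass parse-object () ())
>>>>>>
>>>>>>   (defclass variable-node (parse-object)
>>>>>>     ((name  :initarg :name  :reader node-name)
>>>>>>      (vtype :initarg :vtype :reader node-vtype)))
>>>>>>
>>>>>>   ;; The CLOS class serves as the lisp-level 'signature' of the function.
>>>>>>   (declaim (ftype (function (variable-node) t) variable-declared-type))
>>>>>>   (defun variable-declared-type (node)
>>>>>>     (check-type node variable-node)
>>>>>>     (node-vtype node))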
>>>>>>
>>>>>> The parser is designed to "wrap around" so that the
>>>>>> user-level output of a parse should be the user-level
>>>>>> input (albeit in a "canonical" form). This "mirror effect"
>>>>>> should make it easy to see that the parser properly
>>>>>> parsed the user input.
>>>>>>
>>>>>> The parser is "first class" so it will be available at
>>>>>> runtime as a domain allowing Spad code to construct
>>>>>> Spad code.
>>>>>>
>>>>>> One plan, not near implementation, is to "unify" some
>>>>>> CLOS types with the Axiom types (e.g. String). How
>>>>>> this will happen is still in the land of design. This would
>>>>>> "ground" Spad in lisp, making them co-equal.
>>>>>>
>>>>>> Making lisp "co-equal" is a feature, especially as Spad is
>>>>>> really just a domain-specific language in lisp. Lisp
>>>>>> functions (with CLOS types as signatures) would be
>>>>>> available for implementing Spad functions. This not
>>>>>> only improves the efficiency, it would make the
>>>>>> BLAS/LAPACK (see volume 10.5) code "native" to Axiom.
>>>>>>
>>>>>> On the theory front I plan to attend the Formal Methods
>>>>>> in Mathematics / Lean Together conference, mostly to
>>>>>> learn how little I know, especially what I need to know.
>>>>>> http://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 11/28/19, Jacques Carette <address@hidden> wrote:
>>>>>>> The underlying technology to use for building such an algebra
>>>>>>> library is documented in the paper "Building on the Diamonds
>>>>>>> between Theories: Theory Presentation Combinators"
>>>>>>> http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf
>>>>>>> [which will also be on the arxiv by Monday, and has been
>>>>>>> submitted to a journal].
>>>>>>>
>>>>>>> There is a rather full-fledged prototype, very well documented at
>>>>>>> https://alhassy.github.io/next-700-module-systems/prototype/package-former.html
>>>>>>>
>>>>>>> (source at https://github.com/alhassy/next-700-module-systems).
>>>>>>> It is literate source.
>>>>>>>
>>>>>>> The old prototype was hard to find - it is now at
>>>>>>> https://github.com/JacquesCarette/MathScheme.
>>>>>>>
>>>>>>> There is also a third prototype in the MMT system, but it does
>>>>>>> not quite function properly today; it is under repair.
>>>>>>>
>>>>>>> The paper "A Language Feature to Unbundle Data at Will"
>>>>>>> (https://alhassy.github.io/next-700-module-systems/papers/gpce19_a_language_feature_to_unbundle_data_at_will.pdf)
>>>>>>>
>>>>>>> is also relevant, as it solves a problem with parametrized theories
>>>>>>> (parametrized Categories in Axiom terminology) that all current
>>>>>>> systems
>>>>>>> suffer from.
>>>>>>>
>>>>>>> Jacques
>>>>>>>
>>>>>>> On 2019-11-27 11:47 p.m., Tim Daly wrote:
>>>>>>>> The new Sane compiler is also being tested with the Fricas
>>>>>>>> algebra code. The compiler knows about the language but
>>>>>>>> does not depend on the algebra library (so far). It should be
>>>>>>>> possible, by design, to load different algebra towers.
>>>>>>>>
>>>>>>>> In particular, one idea is to support the "tiny theories"
>>>>>>>> algebra from Carette and Farmer. This would allow much
>>>>>>>> finer grain separation of algebra and axioms.
>>>>>>>>
>>>>>>>> This "flexible algebra" design would allow things like the
>>>>>>>> Lean theorem prover effort in a more natural style.
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>> On 11/26/19, Tim Daly <address@hidden> wrote:
>>>>>>>>> The current design and code base (in bookvol15) supports
>>>>>>>>> multiple back ends. One will clearly be a common lisp.
>>>>>>>>>
>>>>>>>>> Another possible design choice is to target the GNU
>>>>>>>>> GCC intermediate representation, making Axiom "just
>>>>>>>>> another front-end language" supported by GCC.
>>>>>>>>>
>>>>>>>>> The current intermediate representation does not (yet)
>>>>>>>>> make any decision about the runtime implementation.
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 11/26/19, Tim Daly <address@hidden> wrote:
>>>>>>>>>> Jason Gross and Adam Chlipala ("Parsing Parses") developed
>>>>>>>>>> a dependently typed general parser for context free grammar
>>>>>>>>>> in Coq.
>>>>>>>>>>
>>>>>>>>>> They used the parser to prove its own completeness.
>>>>>>>>>>
>>>>>>>>>> Unfortunately, Spad does not have a context-free grammar.
>>>>>>>>>> But it is an interesting thought exercise to consider
>>>>>>>>>> an "Axiom on Coq" implementation.
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


