
From: root
Subject: [Axiom-developer] Re: [Axiom-math] documentation and the crystal
Date: Wed, 31 Dec 2003 03:45:47 -0500

>Usually, people have a top-down approach: they model a system
>abstractly (using more or less formal notations such as UML or SDL)
>and then refine it into the actual code. But, in the case of Axiom, we
>need the reverse. We need to start from concrete objects (files, lines
>of source code) and add semantics to climb levels of abstraction. Of
>course, you follow different ladders, in the sense that understanding
>the compiler or the algebra needs different information and is
>structured differently; hence Tim's different crystal facets.
>More concretely, I would propose the following approach:
> 1. start from parsers for the src/ directory. Parse directory structure
>    and each file, categorize them (boot, lisp, spad, ...) and construct
>    basic abstractions (list of lisp and boot functions; list of spad
>    categories, domains and operators; ...)
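The parsing step above could be sketched quickly. Here is a toy Python pass that walks a source tree and buckets files by category; the extension-to-category mapping is illustrative, not Axiom's actual layout:

```python
import os
from collections import defaultdict

# Hypothetical mapping from file extension to source category.
CATEGORIES = {".boot": "boot", ".lisp": "lisp", ".spad": "spad",
              ".pamphlet": "pamphlet"}

def categorize_tree(root):
    """Walk a source tree and bucket file paths by category."""
    buckets = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1]
            buckets[CATEGORIES.get(ext, "other")].append(
                os.path.join(dirpath, name))
    return buckets
```

The returned buckets would then feed the next pass (listing functions, categories, domains, operators per file).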

If you look at the int/algebra directory you'll see directories called
NRLIBs (e.g. DHMATRIX.NRLIB) which contain several files. These files
are output from the spad compiler. The databases are built from
information in these files. We can enhance the spad compiler to include
additional information in forms that can be written into databases,
semantic networks, or whatever we like. The current database (*.daase)
files are random-access files. There is a C program called asq which
knows how to read these databases and answer queries (src/etc/asq.c).
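To see why random access matters for query tools like asq, here is a toy Python illustration of the general idea: an index maps a key to a byte offset, so a query seeks straight to the record instead of scanning the whole file. (This is only the general idea; the real *.daase layout differs.)

```python
def write_keyed(path, records):
    """Write records sequentially; return a {key: byte offset} index."""
    index = {}
    with open(path, "wb") as f:
        for key, value in records.items():
            index[key] = f.tell()          # remember where this record starts
            f.write(value.encode() + b"\n")
    return index

def read_keyed(path, index, key):
    """Seek directly to one record by key, without scanning."""
    with open(path, "rb") as f:
        f.seek(index[key])
        return f.readline().rstrip(b"\n").decode()
```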

> 2. from information extracted in step 1, construct one or several
>    representation (knowledge graph for example) with the found
>    semantics (name and body of a function for example), probably using
>    a standard technology as W3C semantic web
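The representation in step 2 could be as simple as a triple store, in the spirit of the W3C semantic web's RDF model: facts are (subject, predicate, object) triples and queries are patterns with wildcards. A minimal Python sketch (the example facts are made up for illustration):

```python
class TripleStore:
    """A tiny in-memory (subject, predicate, object) store."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return all triples matching the pattern; None is a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]
```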

I'll have to look at Bill's references to see what technology they are
using for the semantic web. There are several dozen ways of doing this
and almost all of them are not "well-founded" in any mathematical sense.
KREP is well founded. That is, when you build a semantic network one thing
you want to do is put a concept into the network "in the appropriate place".
Most of these systems have heuristics for doing this. KREP has a logically
sound, well-defined predicate called subsumption for comparing two concepts
and deciding if one subsumes the other. In KREP a new concept is put in
the semantic network by starting at the top and using the subsumption
predicate to push the concept down until it hits "the right place".
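To make that placement loop concrete, here is a toy Python sketch: a concept is modeled as a set of required properties, A subsumes B when A's properties are a subset of B's, and a new concept is pushed down from the top until no child subsumes it. This only illustrates the idea; KREP's actual subsumption predicate is much richer.

```python
class Concept:
    def __init__(self, name, props):
        self.name = name
        self.props = frozenset(props)   # required properties
        self.children = []

def subsumes(a, b):
    """a subsumes b when everything a requires, b also requires."""
    return a.props <= b.props

def classify(top, new):
    """Push `new` down from `top` until it hits 'the right place'.
    Returns the parent it was attached to."""
    node = top
    while True:
        lower = [c for c in node.children if subsumes(c, new)]
        if not lower:
            node.children.append(new)
            return node
        node = lower[0]
```

For example, a Field concept (add, mul, inv) would be pushed below a Ring concept (add, mul), because Ring subsumes Field.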

Semantic networks based on "is-a" links, "kind-of" links, and other
such ad hoc ideas are badly formed and suffer from a variety of logical
problems.

> 3. the "Axiom explorer" (Tim, Bill or I) is interested in a specific
>    "crystal facet" (e.g. the compiler). He builds another tool
>    (e.g. call-graph analyzer for lisp code) which in turn is used to
>    construct a new knowledge graph, concretized in additional
>    information in the W3C semantic web.
> 4. Within the abstraction level built in step 3 (or 2), the "Axiom
>    explorer" adds their own knowledge (e.g. this set of functions is
>    used to parse Spad code, this other set is used for type analysis,
>    ...) to the semantic web.
> 5. using previous abstraction levels and probably building a new tool,
>    the "Axiom explorer" iterates, climbing abstraction levels, until he
>    reaches his own goal.
>Repeat steps 1 to 5 with enough people to cover Axiom from A (Algebra)
>to Z (zerodim). 
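The call-graph analyzer mentioned in step 3 could start very crudely. Here is a rough Python sketch that scans Lisp source for DEFUNs and records which known function names appear in each body. A regex pass like this is approximate (it ignores strings, comments, and macros); a real tool would read the forms with a Lisp reader.

```python
import re
from collections import defaultdict

DEFUN = re.compile(r"\(defun\s+([^\s()]+)", re.IGNORECASE)

def call_graph(source):
    """Map each defun name to the set of known functions its body calls."""
    names = DEFUN.findall(source)
    parts = DEFUN.split(source)[1:]        # alternating: name, body, name, body...
    graph = defaultdict(set)
    known = set(names)
    for name, body in zip(parts[0::2], parts[1::2]):
        for other in known:
            if other != name and re.search(
                    r"\(" + re.escape(other) + r"[\s)]", body):
                graph[name].add(other)
    return graph
```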
>Ok. I might be a bit optimistic and what I have said might appear
>rather abstract, but this is the current state of my thoughts. :)
>Even if you do not like the above ideas, I think the following
>"principles" are needed for a documentation system for Axiom
>(principles already formulated by Tim in his first email):
> o separation principle: in engineering, people separate a complex
>   issue into _independent_ sub-issues to be able to understand them
>   and solve them independently. We probably need to deconstruct Axiom
>   into independent (or at least loosely connected) sub-systems to be
>   able to understand them (i.e. Tim's different crystal facets);

I stole this principle from the Unix world: use simple tools (asq,
makefiles) and gang them together to make more complex tools. Thus I
see asq-style programs triggered by clicking on a facet to answer a
question. Some facets might be backed by things like texmacs, xdvi, or
latex.

> o "build on giant shoulders" principle: we need a way to reuse
>   knowledge from other "crystal facets". For example, I would use Tim
>   knowledge of the internals of Axiom to understand how the compiler
>   compiles a given portion of the algebra;

5'8" hardly qualifies as a giant :-) 

> o automation principle: Axiom is too big to add information manually
>   to each function, each object, etc. We need tools to annotate a set
>   of objects given a selection criterion (e.g. all operators in this
>   Spad domain).

I'd like to automate as much as possible but the automation curve
gets steep in a hurry. At least for the algebra code we already have
tools in place. If we can get to where we can draw a lattice in the
next 6 months I should have the data (both hand and machine generated)
for the algebra.
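The bulk-annotation idea itself is simple to mechanize. A toy Python sketch, where the objects and the selection predicate are made up for illustration:

```python
def annotate_where(objects, predicate, **notes):
    """Attach notes to every object (a dict) satisfying the predicate.
    Returns the number of objects annotated."""
    hits = 0
    for obj in objects:
        if predicate(obj):
            obj.setdefault("annotations", {}).update(notes)
            hits += 1
    return hits
```

For example, "all operators in this Spad domain" becomes one call: `annotate_where(ops, lambda o: o["domain"] == "DHMATRIX", facet="algebra")`.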

>New year wish: I'll try to write and "show you the code" for the above
>ideas. :) My own todo list for this subproject of Axiom would be:
> - learn more about the W3C Semantic Web (thank you Bill for the
>   pointer)
> - find or write tools to manipulate the Semantic Web (it might be
>   Emacs with a proper mode or a more elaborate graphical tool)
> - apply the above approach, starting from the directory structure in
>   Axiom's src/ directory.
> - from this first experiment, think about what would need to be
>   "standardized", like a common dictionary or vocabulary, etc. Beyond
>   the usual technological issues, I think this is one of the harder
>   points. How do we build a set of knowledge that will still be
>   useful 30 years from now?

>By the way, does anybody know about a library to *draw* and manipulate
>arbitrary graphs (in Common Lisp, C++, ML, ...) in a user interface? I
>know about DOT and VCG but they do not match my needs: I would like to
>draw a graph, know when the user clicks on a graph node or edge, and
>react accordingly. Does anybody know where I could find such a
>ready-made tool? Any knowledge of a browser for the W3C semantic web?
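The interactive part of such a widget largely reduces to hit-testing: given a mouse click, decide which node it landed on. A toolkit-independent Python sketch (coordinates and radii are illustrative), which could back a Tk, GTK, or web canvas equally well:

```python
import math

def node_at(nodes, x, y, radius=10.0):
    """Given {name: (x, y)} node positions, return the name of the node
    whose disc of the given radius contains the click, or None."""
    for name, (nx, ny) in nodes.items():
        if math.hypot(x - nx, y - ny) <= radius:
            return name
    return None
```

A toolkit would call this from its mouse-click callback and then redraw or expand the selected node.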

There used to be a tool that would allow you to move around in a graph.
It was used to provide a clever desktop: as you moved over portions of
the desktop, the things nearest the mouse pointer got larger and were
centered. I can't remember what it was called, but I think Microsoft
used a similar idea in XP with the toolbar.

I'm going to try to get hold of a tool that runs in Common Lisp and
gives access to the GTK bindings. We can use it to prototype some basic
functionality (like opening an empty window, drawing a graph in the
window, embedding texmacs in the window, etc.). The developer will be
back on the 5th and I'll look into it further at that time.

