axiom-developer

Re: [Axiom-developer] [build-improvements] Requests for discussion


From: Ralf Hemmecke
Subject: Re: [Axiom-developer] [build-improvements] Requests for discussion
Date: Thu, 03 Aug 2006 20:33:55 +0200
User-agent: Thunderbird 1.5.0.5 (X11/20060719)

On 08/03/2006 04:10 PM, root wrote:
>> I guess the literate idea even says that it does not matter how a
>> file is called. It is most important that you write a paper from
>> which you can generate all the code (even different files from one
>> pamphlet source). That sounds nice, but in some sense I find that
>> very difficult to maintain. For ALLPROSE I set the convention every
>> file is a .nw file.
>>
>> I suggest to adopt that convention for axiom (replace .nw by .pamphlet).
>
> Ralf,
>
> In fact the system direction is the exact opposite, at least the parts
> I touch. C (and, even more egregious, Java) adopt the file-per-function
> kinds of idea.

You won't believe it, but I agree. The literate idea does contradict Java's one-file-per-class rule.

But what I think is important is not just to avoid writing HUGE files that nobody can manage any more, but to add a bit of structure. Nowadays we could write a whole book in just one LaTeX file, but I bet most people split it over several files to keep things manageable (in their heads, not because of the file system).

We should also note that we are not yet at a point where we can really start to work on the mathematics that we all love to do. We are still struggling with the build process. So my suggested convention is simply meant to keep the build simple. If you come up with a better one, you are welcome.

I think any convention is better than the mess we currently have. What I have seen in the Axiom build process is one-pamphlet-per-Makefile, which is very much Java-like. Yes, one would have ONE document that describes the whole build process, that is clear. But that document is a generated one (dvi/pdf/...), not necessarily just ONE pamphlet.

If you have the dvi and clicking on it makes your editor jump directly to the appropriate source, you would not even care what the file is called. But if you impose a certain structure on the files (like the Makefile.am.pamphlet structure that Gaby is going to create), the whole build process can be described much more easily, since it relies on the standard GNU Autotools. If you put all the various Makefile.am files into just ONE big pamphlet, you additionally need a description (and some shell script) that extracts all those different files from your .pamphlet. Note that you would have to call notangle several times.
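Since noweb may not be installed everywhere, here is a hedged sketch in Python of the extraction step such a multi-file pamphlet needs, one pass per target file (the pamphlet content and chunk names are invented for illustration; real notangle additionally resolves chunk references and continuations):

```python
def extract_chunk(pamphlet_text, chunk_name):
    """Pull one named noweb chunk out of a pamphlet.

    A chunk starts with a line <<name>>= and ends at the next line
    containing only "@".  This mimics `notangle -R'name'`.
    """
    out, inside = [], False
    for line in pamphlet_text.splitlines():
        if line.strip() == f"<<{chunk_name}>>=":
            inside = True
            continue
        if inside and line.strip() == "@":
            inside = False
            continue
        if inside:
            out.append(line)
    return "\n".join(out)

# Hypothetical pamphlet holding two generated files.
pamphlet = """\
\\section{The build}
<<Makefile.am>>=
bin_PROGRAMS = axiom
@
<<configure.ac>>=
AC_INIT([axiom], [1.0])
@
"""

# One extraction pass per target file -- this is why a multi-file
# pamphlet needs several notangle invocations.
print(extract_chunk(pamphlet, "Makefile.am"))
print(extract_chunk(pamphlet, "configure.ac"))
```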

If everyone adopts his own convention for which files are in a pamphlet, that basically means that everyone provides his own (certainly non-standard) scripts to transform the pamphlet into actual source code. I don't think I would call that manageable, even if the pamphlet describes what it does. Conventions and rules are the way to go.

One rule we already have: Everything should be a pamphlet.
Next rule should be that a pamphlet has a certain structure, so that one set of pamphlets can be used to produce one document (a book, if you like) that describes the build, another set of pamphlets (not necessarily disjoint from the first) describes the interpreter, and so on. (What exactly that "certain structure" is remains unclear; at least I want there to be no \documentclass, \begin{document}, or \end{document} in the pamphlets any more. The \usepackage lines are a problem, I know, and I have no good solution yet.)
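Such a rule might look like the following sketch (the file names are invented): each pamphlet carries only body text and code chunks, while a single generated master document supplies the preamble for whichever "book" is being assembled:

```latex
% book.tex -- a generated master document (hypothetical names).
% The pamphlets themselves contain no \documentclass or
% \begin{document}; they are pure body text plus code chunks.
\documentclass{book}
\usepackage{noweb}   % the unsolved \usepackage problem lives here
\begin{document}
\input{interpreter.pamphlet}
\input{compiler.pamphlet}
\end{document}
```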

> Indeed we are forced to use "IDE"s because the tiny files have
> overwhelmed our ability to cope.

If you think that IDEs are just for that purpose, I think you have missed something. You get more than that. For example, if your mouse hovers over a function, you see its API description; you can click and jump to the function's definition. Right, that sounds like programming, but even if we write pamphlets we finally have to write some code, so if your IDE brought you immediately to the section where the function you want to use is described and defined, that would be convenient and make your work more productive. It is just another workflow you have to become accustomed to.

And somehow you have to turn all the code chunks into actual source code in some programming language. Pamphlets are nice, but it is a bit hard to debug the code that comes out. You know, none of us is perfect, so debugging is still an issue.

> The #line directive is a hack to allow a runtime to figure out which
> grain-of-sand-file contains this particular function. It is also a
> legacy of history. I am building a program for work that lives in one
> literate file, has 30,000 lines of Lisp code, and has 2000+ pages of
> final pdf documentation (so far) and 7000 test cases. It is never
> ambiguous which file contains the failing function.

OK, an example: my Aldor program tells me that there is an error in the + function. How long would it take you to find the right function definition, without a compiler telling you a line number, if you have 100 definitions of + in your 10,000-line pamphlet? Even with just 10 definitions of +, it is a waste of time if I am looking in the wrong place in 90% of the cases.
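The line-number problem is exactly what noweb's `notangle -L` option addresses: while tangling, it emits C `#line` directives pointing back at the pamphlet, so compiler and debugger messages name the pamphlet line rather than the generated file. A minimal Python model of that idea (the file and chunk names are invented for illustration):

```python
def tangle_with_lines(pamphlet_lines, chunk_name, source_name):
    """Extract a noweb chunk, prefixing it with a C #line directive
    that points back at the pamphlet -- a sketch of `notangle -L`."""
    out, inside = [], False
    for lineno, line in enumerate(pamphlet_lines, start=1):
        if line.strip() == f"<<{chunk_name}>>=":
            inside = True
            # The chunk body starts on the *next* pamphlet line.
            out.append(f'#line {lineno + 1} "{source_name}"')
            continue
        if inside and line.strip() == "@":
            inside = False
            continue
        if inside:
            out.append(line)
    return out

# Hypothetical four-line pamphlet: the code chunk begins at line 3.
pamphlet = [
    "\\section{Addition}",
    "<<plus.c>>=",
    "int plus(int a, int b) { return a + b; }",
    "@",
]
for line in tangle_with_lines(pamphlet, "plus.c", "plus.pamphlet"):
    print(line)
```

With the directive in place, a compiler error in the generated `plus.c` is reported against line 3 of `plus.pamphlet`, which is where you actually edit.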

> Computer programs have nothing to do with their file storage but
> we have linked these ideas and suffered for it. Suppose we follow
> the past and make Axiom "include" files which separate out the
> signature information in a domain or category into separate files.
> Then we could add an "include" statement. This way lies madness.

If I use a category like "Ring", I am surely NOT going to "include" a definition of "Ring" in my code. It is completely sufficient for me if the "Ring" that appears in my code and its description is a hyperlink that leads me to the right place in the generated DVI/PDF/HTML. Especially if you think of the web as one big document: why would you want to include and reproduce all existing things again and again?


> Consider what happens if you scale Axiom by a factor of 100 in
> the next 30 years onto a petamachine. You end up with 110,000
> domains and categories with roughly 1,100,000 functions. I don't
> want 110K files. I want 110 books.

Yes. But if your editor did not even tell you how many different files your document consists of, why would you care whether you have 110K or 220K or just 110 files? We simply do not yet have the right tools to abstract away from the file system.

> We need to lift our eyes back to the humans, away from the technology.
> Communicate, don't program.

I support this idea. But somebody has to provide the underlying technology, right? And Axiom partly is, or will become, such a system. You don't want to start from the very beginning; you want to draw a line somewhere. You don't explain how a computer is built when you want to talk about permutation groups.

And the "book" idea is good, but it is not perfect. You have already said that you want to write books for different people, looking from different perspectives onto the same data. So wouldn't it be nice to have all these data (= some information) split into atoms that can be put together into new molecules (= books)? Read about the Leo system; it is basically built on this idea of different views onto the same data.

And actually, if we used our Wiki more extensively, it too could provide different views onto Axiom. I think that even in 20 years, the 30-year horizon will still be just as far away.

Ralf




