
[Axiom-developer] Re: Literate programming


From: root
Subject: [Axiom-developer] Re: Literate programming
Date: Tue, 16 May 2006 21:21:46 -0400

(note that this part of my reply is copied to the axiom mailing list)

> Would you care to share some insights as to why you are thinking about
> redoing noweb in lisp?

five reasons. 
(well "reasons" might be a bit strong except in a religious sense)


first, noweb is slow. my current document (for work) has about 60k
bytes so far.  it takes about 20 seconds to make a change, save it,
and then run the makefile to extract the code, run the test cases, and
update the dvi file.

of the 20 seconds about 18 are spent in noweb. it is not scaling.

noweb uses the old byte-at-a-time model with pipes, awk, sed, etc.

in lisp i'd use read-sequence, which can read the whole file in subsecond
time, and the lisp processing won't take much longer. you could use mmap
in C but that won't work with the C/sed/awk pipe scripts.
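the whole-file read is only a few lines of lisp. a minimal sketch (slurp-file
is a hypothetical helper name, not an existing axiom function):

```lisp
;; a minimal sketch: read an entire file into one string with a single
;; READ-SEQUENCE call, instead of the byte-at-a-time pipe model.
;; SLURP-FILE is a hypothetical name used for illustration.
(defun slurp-file (pathname)
  (with-open-file (stream pathname :direction :input)
    (let ((buffer (make-string (file-length stream))))
      ;; READ-SEQUENCE fills BUFFER in one call and returns the number
      ;; of characters actually read (this may be less than FILE-LENGTH
      ;; on platforms that translate line endings).
      (subseq buffer 0 (read-sequence buffer stream)))))
```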

axiom source is slowly dissolving into fully literate documents 
and these documents are getting larger and more complex. we MUST scale.



second, if lisp could manipulate literate documents directly it would
greatly increase the ways we could integrate literate documents into
the interpreter/compiler/browser/graphics. it would be possible to
(read) directly from a chunk in a file so that program and data files
don't need to be notangled, which eliminates the need for notangle and
makes literate documents fundamental.



third, axiom is mostly lisp. noweb adds requirements for 
sed/C/[gawk|nawk|awk]. it's been argued that these exist already
but that's hardly a reason for continuing. a uniform implementation
language increases function (see above) and reduces developer knowledge 
requirements. if i can just type (read (find-chunk "chunkname")) it is
much easier and more useful than starting a separate process to run a
C/sed/awk function and then opening a stream to read the result.
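as a sketch of what that interface could look like (find-chunk is a
hypothetical name; this version scans the current noweb chunk syntax and
returns a stream so it composes directly with read):

```lisp
;; a minimal sketch of the proposed FIND-CHUNK: scan a literate file
;; for a named noweb chunk (<<name>>= ... @) and return a stream on its
;; body, so no separate notangle process is needed.  FIND-CHUNK is a
;; hypothetical name used for illustration, not an existing function.
(defun find-chunk (name file)
  "Return an input stream on the body of chunk NAME in FILE, or NIL."
  (with-open-file (stream file :direction :input)
    (let ((header (format nil "<<~a>>=" name)))
      (loop for line = (read-line stream nil)
            while line
            when (string= line header)
              ;; collect lines until the "@" terminator, then hand the
              ;; body back as a stream for READ or LOAD-style use
              return (make-string-input-stream
                      (with-output-to-string (body)
                        (loop for code = (read-line stream nil)
                              while (and code (string/= code "@"))
                              do (write-line code body))))))))
```

with this, (read (find-chunk "chunkname" "paper.pamphlet")) hands the
chunk's first form straight to the lisp reader with no extra process.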



fourth, since all of my literate work uses latex it makes sense to 
limit the noweb functionality to be pure latex. so instead of
defining a chunk as

<<anything>>=
@

we would write 

\begin{chunk}{anything}
\end{chunk}

this would eliminate the noweave step completely and make all of the
power of latex available. the chunk environment could be made
sensitive to fonts or other latex markup like bold face, arrays, etc.
chunks need not be raw quoted code but could include hyperlinks, which
would be stripped out during the notangle step.
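on the latex side, one plausible definition of such an environment (a
sketch built on the fancyvrb package, not axiom's actual macro; the name
of the chunk is carried as an environment argument):

```latex
% a sketch, not Axiom's actual macro: a verbatim chunk environment
% that takes the chunk name as an argument, built on fancyvrb.
\usepackage{fancyvrb}
\newenvironment{chunk}[1]
  {\par\noindent\texttt{#1}$\;\equiv$%
   % \VerbatimEnvironment is fancyvrb's documented hook for defining
   % custom environments on top of Verbatim
   \VerbatimEnvironment
   \begin{Verbatim}}
  {\end{Verbatim}}
```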



fifth, i can't leave "good enough" alone. i see axiom growing by
a factor of 10 or more and i'm constantly gnawing on the problem
of scaling and integration. unifying chunks and latex with lisp
gives me a whole new realm of ideas for unifying the idea of 
literate documents and computational math. 

i want the distinction to disappear. i want computational math to just
be published as literate documents containing theory and code where
the running code is considered the "proof". i want the standard of
academic interchange to require running code much as standard
mathematics requires a proof. i want a literate journal that referees
literate computational mathematics papers. (for academics, programs
are currently a losing proposition because they do not count toward tenure)

noweb is a wart on the whole process. it exposes mechanism.
you should be able to just "run" a computational mathematics paper.
no mechanism, no machinery. 

it should "just work".

t







