
Re: [Axiom-developer] size issue with noweb


From: root
Subject: Re: [Axiom-developer] size issue with noweb
Date: Fri, 21 Apr 2006 15:24:55 -0400

> > until this massive project settles down i don't have the time to
> > undertake the project. at some point the little slices of noweb
> > time are eating away at my code-compile-check (CCC) loop. since
> > this is the main workday loop every second counts. at some point,
> > probably when my CCC loop hits 1 minute per change, it makes
> > sense to stop real work, code the lisp version of noweb, and then
> > continue. since my productivity is inversely related to the CCC
> > time i need my CCC loop to be instantaneous.

> Surely this way of programming must be extremely foreign to a
> lisp programmer! Working with one large document and repeatedly
> reprocessing the whole thing and recreating the program each
> time you want to test something is certainly not the way most
> people use lisp. Even when programming in less "dynamic"
> languages like Python or even C, one usually modifies only a
> small part of the program and does the minimum amount of
> *incremental* work required to test it.

Multiple files per program came about historically because
computers were limited in what they could process. Unix on
my PDP-11/40 couldn't handle large files. In fact, the Unix
editor on the PDP-11/40 would die if a file exceeded half
of available core memory (minus the editor/opsys/etc.),
which effectively limited my files to about 8K bytes,
including comments.

These limitations generated linker overlays (thankfully 
unused), segment addressing (still bothering us in the x86),
#include noise, preprocessors, obscure library structures,
byte-by-byte file processing, sand-grain sized source files,
linker file formats, makefiles, and all manner of now-useless
tools and technologies.

If you work on a large program you need to be able to 
keep the organization in your head. If you use 1K C files as
your tool you end up grepping all over the place to find the
right include file or the C source file. Worse yet is
include hell (the key reason why Axiom is not yet running
on a Mac), where each opsys changes the order of these
tiny include files.

My program is currently over 50k lines and 1200 pages and
I don't feel any strain at all while writing it. I estimate
it will be finished at about 500k lines and 12k pages. If
I did all that in the C or Java style of tiny files I'd have
a dozen directories (really just virtual chapter/sections)
and hundreds of files with associated makefiles, include
files, test cases, and, oh yes, maybe some documentation.

My head is a very small place and I need to optimize it.

My monolithic file style has evolved over the years into an
extremely productive environment. At any time I know that my
program has up to date documentation, test cases, and code.
I know it works because I just completely rebuilt it a minute
ago and any change I've made to the system happened in the
last minute. I can send you the single file and know that
it works and is fully documented and tested. And I can do
this at any moment of the day. Since I rarely change more
than 10 lines before completely rebuilding, testing, and
checking the document output, I know what just broke (since
the test case is in there), when and how I broke it, and the
broken source is still in front of me.

Furthermore, since the file is a document I'm constantly
reminded that I'm writing for people, not the machine. The
combination of emacs, a shell buffer, latex, and xdvi keeps
me aware of the structure of the problem as well as the
structure of the program. There is nothing quite like a
Chapter/Section organization to help organize a program
as well as my thinking. The literate programming technology
allows me to reorganize the document and the program at will.
Thus I can break out a pile of code into a new section if it
becomes rich enough in function to warrant its own place.
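
The chunk mechanism that makes this reorganization possible is
simple enough to sketch. The following is a minimal, hypothetical
tangle step in the spirit of noweb (not the author's tool, and it
ignores real noweb's quoting, continuation, and escape rules): it
collects `<<name>>=` chunk definitions and recursively expands
`<<name>>` references, so a chunk can live wherever the document
finds it convenient:

```python
import re

def tangle(source, root="*"):
    """Minimal noweb-style tangle: gather named chunks, then
    expand references from the root chunk outward.  A sketch
    only; real noweb also handles indentation rules, chunk
    continuation, and escaping of << inside code."""
    chunks = {}
    name = None
    for line in source.splitlines():
        m = re.match(r"<<(.+)>>=$", line)
        if m:                         # start of a chunk definition
            name = m.group(1)
            chunks.setdefault(name, [])
        elif line == "@":             # end of chunk, back to prose
            name = None
        elif name is not None:        # code line inside a chunk
            chunks[name].append(line)

    def expand(name):
        out = []
        for line in chunks.get(name, []):
            ref = re.match(r"(\s*)<<(.+)>>$", line)
            if ref:                   # splice the referenced chunk,
                indent = ref.group(1) # preserving the indentation
                out.extend(indent + l for l in expand(ref.group(2)))
            else:
                out.append(line)
        return out

    return "\n".join(expand(root))

if __name__ == "__main__":
    doc = (
        "<<*>>=\n"
        "def greet():\n"
        "    <<body>>\n"
        "@\n"
        "Prose between chunks is ignored by tangle.\n"
        "<<body>>=\n"
        'return "hello"\n'
        "@\n"
    )
    # prints the function with <<body>> expanded in place
    print(tangle(doc))
```

Because references are resolved by name, moving a chunk to a new
section of the document changes nothing in the tangled program.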

Tools affect the way you think about a problem and even
what you can think about. We need to stop living with the
historical limitations of tools from my childhood and start
working with real, industrial strength tools. We'll shortly
have a 1THz processor with 1TByte of memory. Why limit our
thinking to 1960s technology?

t



