
Re: [Axiom-developer] Desired functionality from noweb

From: C Y
Subject: Re: [Axiom-developer] Desired functionality from noweb
Date: Fri, 4 May 2007 05:23:04 -0700 (PDT)

--- Ralf Hemmecke <address@hidden> wrote:

> Hi Cliff,
> On 05/04/2007 02:48 AM, C Y wrote:
> > As I'm getting deeper into learning what noweb is capable of, I
> > would like to ask the list if anyone is familiar with the various
> > options noweb provides on the LaTeX side of the equation and how
> > many of them people like to use.

> noweave -n -index
> -n      does not produce a latex wrapper (ALLPROSE provides a
> wrapper.)

OK.  That should be doable by locating the \begin{document} tag,
although we lose any special \usepackage commands if we do...
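As a rough sketch of what I mean (all names here are mine, not anything ALLPROSE provides): locate \begin{document}, drop the wrapper, but rescue the preamble's \usepackage lines so they aren't lost.

```python
import re

def strip_wrapper(tex):
    """Drop everything outside \\begin{document}...\\end{document},
    but keep any \\usepackage lines found in the preamble."""
    head, sep, rest = tex.partition(r"\begin{document}")
    if not sep:                      # no wrapper found; return unchanged
        return tex
    body = rest.split(r"\end{document}")[0]
    packages = re.findall(r"\\usepackage(?:\[[^\]]*\])?\{[^}]*\}", head)
    return "\n".join(packages) + body

doc = (r"\documentclass{article}" "\n"
       r"\usepackage[T1]{fontenc}" "\n"
       r"\begin{document}" "\nHello\n"
       r"\end{document}")
print(strip_wrapper(doc))
```

This is only a first approximation; a real version would have to decide where the rescued \usepackage lines should end up in the combined document.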

> -index  produces index information of identifiers


> Although deprecated by Norman Ramsey, I use the
> <<chunkname>>=
> ...
> @ %def Identifier1 Identifier2 ...
> a lot to get hyperlinks inside code chunks. (For .as files, this %def
> information is autogenerated by ALLPROSE scripts.)

In the case of Lisp, I'm not quite sure how to do this in general.
Most of that information can probably be generated, but Lisp macros
would cause some difficulties in that department.
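For the easy cases, a generator along these lines might work (a hypothetical sketch, not an existing script): scan a chunk's Lisp code for the standard defining forms and emit the "@ %def ..." line that noweave's index wants. Definitions introduced through user macros are exactly what this misses.

```python
import re

# Standard Common Lisp defining forms; macro-defined definers are missed.
DEFINERS = ("defun", "defmacro", "defvar", "defparameter", "defconstant")

def def_line(chunk_code):
    """Return the '@ %def ...' terminator for one code chunk."""
    pat = re.compile(r"\(\s*(%s)\s+([^\s()]+)" % "|".join(DEFINERS), re.I)
    names = [m.group(2) for m in pat.finditer(chunk_code)]
    return "@ %def " + " ".join(names) if names else "@"

code = "(defun my-add (a b) (+ a b))\n(defvar *limit* 10)"
print(def_line(code))   # @ %def my-add *limit*
```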

> You also see
> NOTANGLE, used in chunks 362, 363, 376, 435, 437, 468b, and 548.
> I find this information sometimes quite useful to find out where some
> identifiers are used. And look a bit closer at the last link (548).
> NOTANGLE is defined in Makefile.nw. The link leads to a place that is
> in test/Makefile.nw (see top of .html page).

Are you piping the notangle information into a script that searches for
the strings of the identifiers you want information on?

A lot of this aspect of noweb reminds me of the XREF utility in Lisp -
specifically the "who-calls" option, IIRC.  I think doing all of this
in Lisp could prove a bit non-trivial, at least to do it in a robust
fashion.
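The dumb, text-level version of that cross-reference is simple enough; here is a sketch (all names invented) that, given a map of chunk names to their code, reports which chunks mention an identifier, much like the "NOTANGLE, used in chunks ..." lines above. The who-calls version, which understands the code, is the hard part.

```python
import re

def who_uses(chunks, identifier):
    """Return the sorted names of chunks whose code mentions identifier."""
    pat = re.compile(r"\b%s\b" % re.escape(identifier))
    return sorted(name for name, code in chunks.items() if pat.search(code))

chunks = {
    "Makefile.nw:rules": "all:\n\tNOTANGLE -Rmain prog.nw > prog.c",
    "test/Makefile.nw": "check:\n\tNOTANGLE -Rtest prog.nw > test.c",
    "doc": "This section explains the build.",
}
print(who_uses(chunks, "NOTANGLE"))
```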

> My way to use noweave is:
>    1. concatenate all .nw files
>    2. run noweave -n -index on the concatenated file
>    3. split the output into .tex files corresponding to the
>       original .nw files
>    4. Use a wrapper (which looks approximately like)
>       \documentclass{article}
>       \usepackage{allprose}
>       \begin{document}
>       \inputAllTexFiles
>       \end{document}
>    5. latex/pdflatex/htlatex that wrapper.

OK.  So when you write .nw files originally they contain no header
information at all.  I think this gets back to the original discussion
on how to handle different pamphlets needing different style files
(sistyle for units, for example).  I'll need to ponder this one some
more and see if I can find a package or two designed to handle such
situations.
> You find the full story on the website

Out of curiosity Ralf, have you ever benchmarked ALLPROSE?  How long
do you think it would take to process something really large?

> Note that I don't think that everything should go into one big
> pamphlet file. I rather like to edit several files which finally
> produce ONE document.

I tend to think of it as one pamphlet = 1 "concept", and then pamphlets
would be bundled like conference proceedings to make larger volumes. 
It's the combining that makes it interesting.

> Using "inverse search", the file boundaries
> are pretty much blurred. I never type filenames for loading files. I
> simply click into the .dvi file.
> Of course, I have added a number of TeX commands to make use of
> identifiers defined via the %def syntax. They can be used via
>    \useterm{identifier}
> inside ordinary non-chunk text and link to the defining code chunk.


> I don't use the [[..]] syntax at all and would actually suggest not
> to use it. [[...]] is no proper tagging, but rather like \verb.
> Inside my text I rather use something like
> \usemaketarget{...}
> \definetexcommand
> \usetexcommand
> \defineterm
> \useterm
> ...
> See also

I'll need to study that one some more.

> I think also the little arrows at the top of each code chunk are nice,
> in particular if a code chunk continues at some other place. You then
> see a
>    +\equiv
> at the top of the code chunk and can click through the code chunks
> that belong together.

In essence, links that move the reader through the document in the
order in which the machine would see the code?  That's not a bad idea. 

> And if you look at the index you will find red and blue entries. I
> have added a TeX command to modify the noweb.sty so that definitions
> are shown in red in the index. (Style of course adjustable)

I'm still a bit unsure of the viability of the language-aware part of
the noweave process when it comes to Lisp, and doing it right will
greatly increase the challenge of programming all of this.  I think the
right approach here is to start small and scale up.

> What I don't like at all with noweb is that one gets a \par after the
> end of a code chunk if the @ is not followed by a space and %. In
> noweb the text can be continued immediately after the ending @\space,
> but for me that looks terrible. I want to see in the latex source
> where a code chunk ends. A single @ doesn't catch the eye so quickly.

I always assumed that the working literate programming style wouldn't
have code actually inline with text - are you saying you DO want to use
that style and don't want the \par command?

> Don't worry too much about the .sty file. If you generate LaTeX by a 
> program, you can pretty much rewrite the text into simple TeX
> commands so that .sty file programming would be an easy task.


> I have chosen that option for \adname.
> It translates into \adinternalusename
> by the script tools/. Otherwise I would have had a hard time
> dealing with \catcode and such to allow % and friends to appear
> inside the argument of \adname. That command would be used as
>    \adname{-: % -> %}
>    \adname{-: (%, %) -> %}
> and lead to different hyperlinks. And if such a function is also
> defined in another type than the current one, it is also possible
> to say
>    \adname[AdditiveGroup]{-: (%, %) -> %}
> in order to specify exactly where the link should point to. (OK, but
> that is something not really connected to noweb.)

Makes sense, I think - I'll stare at it some more.

> > dhmatrix doesn't appear to use too many of the fancier noweb
> > options and with hyperlinking to link chunks together I'm not
> > sure if some of the features (e.g. the labels identifying page
> > number and a,b,c etc.) add enough to be worth supporting them.
> > Does anyone have any opinions
> > about this?
> dhmatrix is a paper in traditional form. Nowadays, I would
> additionally like to see hyperlinks. 

Maybe we need to define the term hyperlinks - some are simple (like the
ones in my original example, and adding chunk names to an index
shouldn't be TOO hard) and some are dependent on very advanced code
recognition abilities (like identifying where functions are defined and
used, and indexing them.)  I would prefer to start simple and not mess
with understanding the code at first, perhaps working up to that later.

> Time is a precious thing so help the reader to 
> quickly find the thing s/he is looking for. Going back and forth to
> the index manually (like in printed form) is old technology.

Absolutely agree, but there are degrees here, and they result in rapidly
increasing complexity.  Hyperlinks for chunks and chunk references are
simple; hyperlinks WITHIN code inside chunks are not.

> > My thought at this time is with hyperref to link chunks to their
> > definitions (e.g. treat them like normal LaTeX hyperrefs), and
> > perhaps automatically generating index entries for chunks should
> > be enough to cover what we would require.  (Perhaps an
> > automatically inserted "used by" note would be handy to identify
> > the higher level chunk into which a sub-chunk is inserted.)
> Eventually (in particular for the Algebra) I would like to see links
> to where an identifier is exactly used. In order to generate the
> correct hyperlink, it would be necessary to actually compile the
> spad/aldor program so that it becomes clear what an identifier like
> "foo" actually means in a certain context.

OK.  That's FAR beyond where I'm at right now - I'm basically looking
to generate some valid LaTeX that may have a couple of useful features.
 Code awareness is a big step beyond that, and what you're talking
about here may need compiler support to retain the information.

> Note that I could locally have defined
> macro foo(x, y) == a + b;
> so "foo" should better point to the corresponding + in the source (or
> at least to its macro definition.)

Very useful, I agree - also very difficult.  I would be interested to
know how Tim was planning to deal with these issues - perhaps there are
features in Lisp I don't know about that would make this possible.  I'm
sure heuristics could get fairly close to the correct answer,
especially with simple code, but I'm wary of that approach for more
complex programs.
> As I said a big wish is semantic hyperlinks. (As I understood
> correctly, Tim already referred to that option when he said that
> in lisp all the information about symbols is available.)

Ah, by semantic hyperlinks you mean hyperlink generation based on
source code structure and contents?  OK, that's way beyond where I am
right now - I'm still dealing only with chunk structure.  I like what
you are suggesting, and perhaps it should be the default, but putting
THAT level of machinery into Lisp is more than I'll be able to tackle
right now.  I was figuring to achieve the dhmatrix level + hyperlinks
for chunk references and other standard hyperref features, and as we
get more basic features defined we can build on them.

Just as a point of possible interest, there does exist a system that
may have some features worth studying when it comes to dealing with
Lisp code.  Unfortunately it's GPL, so we probably wouldn't want to use
it directly.  (I don't think we could anyway, probably, but I believe
it has some "who-calls" scanning abilities that would at least be a
useful starting point.)

Perhaps we could approach it in this fashion - have the scripts needed
to generate your advanced output be the default, and if testing for the
needed machinery fails, fall back onto the simpler Lisp+vanilla LaTeX
solution.  Over time, we could migrate features into the Lisp solution
until we can reproduce everything we need.

