commit-gnue

From: jcater
Subject: r5249 - in trunk: gnue-appserver gnue-appserver/src gnue-common gnue-common/src gnue-dbtools/src gnue-forms gnue-forms/src gnue-integrator/src gnue-navigator gnue-navigator/src gnue-pos/src gnue-reports gnue-reports/src www/releases www/releases/release-files www/utils www/utils/helpers www/utils/helpers/docutils www/utils/helpers/docutils/docutils www/utils/helpers/docutils/docutils/languages www/utils/helpers/docutils/docutils/parsers www/utils/helpers/docutils/docutils/parsers/rst www/utils/helpers/docutils/docutils/parsers/rst/directives www/utils/helpers/docutils/docutils/parsers/rst/languages www/utils/helpers/docutils/docutils/readers www/utils/helpers/docutils/docutils/readers/python www/utils/helpers/docutils/docutils/transforms www/utils/helpers/docutils/docutils/writers www/web/developers www/web/images www/web/packages www/web/project www/web/shared www/web/tools www/web/tools/forms
Date: Sun, 7 Mar 2004 00:27:49 -0600 (CST)

Author: jcater
Date: 2004-03-07 00:27:44 -0600 (Sun, 07 Mar 2004)
New Revision: 5249

Added:
   trunk/www/releases/release-files/
   trunk/www/releases/release-files/README
   trunk/www/utils/helpers/docutils/
   trunk/www/utils/helpers/docutils/COPYING.txt
   trunk/www/utils/helpers/docutils/FAQ.txt
   trunk/www/utils/helpers/docutils/HISTORY.txt
   trunk/www/utils/helpers/docutils/README.gnue
   trunk/www/utils/helpers/docutils/README.txt
   trunk/www/utils/helpers/docutils/__init__.py
   trunk/www/utils/helpers/docutils/docutils/
   trunk/www/utils/helpers/docutils/docutils/__init__.py
   trunk/www/utils/helpers/docutils/docutils/core.py
   trunk/www/utils/helpers/docutils/docutils/frontend.py
   trunk/www/utils/helpers/docutils/docutils/io.py
   trunk/www/utils/helpers/docutils/docutils/languages/
   trunk/www/utils/helpers/docutils/docutils/languages/__init__.py
   trunk/www/utils/helpers/docutils/docutils/languages/de.py
   trunk/www/utils/helpers/docutils/docutils/languages/en.py
   trunk/www/utils/helpers/docutils/docutils/languages/es.py
   trunk/www/utils/helpers/docutils/docutils/languages/fr.py
   trunk/www/utils/helpers/docutils/docutils/languages/it.py
   trunk/www/utils/helpers/docutils/docutils/languages/sk.py
   trunk/www/utils/helpers/docutils/docutils/languages/sv.py
   trunk/www/utils/helpers/docutils/docutils/nodes.py
   trunk/www/utils/helpers/docutils/docutils/parsers/
   trunk/www/utils/helpers/docutils/docutils/parsers/__init__.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/__init__.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/__init__.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/admonitions.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/body.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/html.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/images.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/misc.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/parts.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/references.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/__init__.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/de.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/en.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/es.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/fr.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/it.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/sk.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/sv.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/states.py
   trunk/www/utils/helpers/docutils/docutils/parsers/rst/tableparser.py
   trunk/www/utils/helpers/docutils/docutils/readers/
   trunk/www/utils/helpers/docutils/docutils/readers/__init__.py
   trunk/www/utils/helpers/docutils/docutils/readers/pep.py
   trunk/www/utils/helpers/docutils/docutils/readers/python/
   trunk/www/utils/helpers/docutils/docutils/readers/python/__init__.py
   trunk/www/utils/helpers/docutils/docutils/readers/python/moduleparser.py
   trunk/www/utils/helpers/docutils/docutils/readers/standalone.py
   trunk/www/utils/helpers/docutils/docutils/statemachine.py
   trunk/www/utils/helpers/docutils/docutils/transforms/
   trunk/www/utils/helpers/docutils/docutils/transforms/__init__.py
   trunk/www/utils/helpers/docutils/docutils/transforms/components.py
   trunk/www/utils/helpers/docutils/docutils/transforms/frontmatter.py
   trunk/www/utils/helpers/docutils/docutils/transforms/misc.py
   trunk/www/utils/helpers/docutils/docutils/transforms/parts.py
   trunk/www/utils/helpers/docutils/docutils/transforms/peps.py
   trunk/www/utils/helpers/docutils/docutils/transforms/references.py
   trunk/www/utils/helpers/docutils/docutils/transforms/universal.py
   trunk/www/utils/helpers/docutils/docutils/urischemes.py
   trunk/www/utils/helpers/docutils/docutils/utils.py
   trunk/www/utils/helpers/docutils/docutils/writers/
   trunk/www/utils/helpers/docutils/docutils/writers/__init__.py
   trunk/www/utils/helpers/docutils/docutils/writers/docutils_xml.py
   trunk/www/utils/helpers/docutils/docutils/writers/html4css1.py
   trunk/www/utils/helpers/docutils/docutils/writers/latex2e.py
   trunk/www/utils/helpers/docutils/docutils/writers/pep_html.py
   trunk/www/utils/helpers/docutils/docutils/writers/pseudoxml.py
   trunk/www/utils/helpers/docutils/optparse.py
   trunk/www/utils/helpers/docutils/roman.py
   trunk/www/utils/helpers/docutils/textwrap.py
   trunk/www/web/developers/section.css
   trunk/www/web/images/b_dev.png
   trunk/www/web/images/b_main.png
   trunk/www/web/images/b_packages.png
   trunk/www/web/images/b_tools.png
   trunk/www/web/images/b_wiki.png
   trunk/www/web/packages/_module_menu.php
   trunk/www/web/packages/section.css
   trunk/www/web/project/index.php
   trunk/www/web/project/section.css
   trunk/www/web/shared/_footer.php
   trunk/www/web/shared/_header.php
   trunk/www/web/shared/_menu.php
   trunk/www/web/shared/base.css
   trunk/www/web/tools/forms/
   trunk/www/web/tools/forms/product.png
   trunk/www/web/tools/section.css
Modified:
   trunk/gnue-appserver/INSTALL
   trunk/gnue-appserver/README
   trunk/gnue-appserver/src/__init__.py
   trunk/gnue-common/README
   trunk/gnue-common/src/__init__.py
   trunk/gnue-dbtools/src/__init__.py
   trunk/gnue-forms/FAQ
   trunk/gnue-forms/INSTALL
   trunk/gnue-forms/README
   trunk/gnue-forms/src/__init__.py
   trunk/gnue-integrator/src/__init__.py
   trunk/gnue-navigator/INSTALL
   trunk/gnue-navigator/src/__init__.py
   trunk/gnue-pos/src/__init__.py
   trunk/gnue-reports/README
   trunk/gnue-reports/src/__init__.py
   trunk/www/utils/create-release-announcements
   trunk/www/utils/helpers/files.py
   trunk/www/utils/helpers/tools.py
Log:
started on web automation scripts

Modified: trunk/gnue-appserver/INSTALL
===================================================================
--- trunk/gnue-appserver/INSTALL        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-appserver/INSTALL        2004-03-07 06:27:44 UTC (rev 5249)
@@ -21,13 +21,13 @@
   - pygresql (also possible for PostgreSQL) [python-pygresql]
   - python-mysqldb (for MySQL)
   you can find more information about possible database backends in the file
-  README.Databases, distributed with gnue-common.
+  README.Databases, distributed with GNUe Common.
 
 * at least one of the following RPC libraries:
   - py-xmlrpc [python-xmlrpc]
   - Pythonware xmlrpc (included in Python starting with 2.2)
 
-* gnue-common 0.5.0 or greater [gnue-common]
+* GNUe Common 0.5.0 or greater [gnue-common]
 
 To build the documentation, you need GNU Texinfo 4.0 or newer installed.
 

Modified: trunk/gnue-appserver/README
===================================================================
--- trunk/gnue-appserver/README 2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-appserver/README 2004-03-07 06:27:44 UTC (rev 5249)
@@ -2,24 +2,28 @@
 
 Introduction
 ------------
-
-The GNU Enterprise Application Server (AppServer) is the core of the n-tier
-variant of the GNU Enterprise system. To the front end (be it GNUe Forms, GNUe
+The GNUe Application Server (AppServer) is the core of the n-tier variant
+of the GNU Enterprise system. To the front end (be it GNUe Forms, GNUe
 Reports or any other tool), it provides user-defineable business objects with
 arbitary fields and methods. While transforming access to those fields and
 methods into database communication and calling of scripts, it cares about
 stability, security, speed, and consistency.
 
+
+Warning
+-------
 The GNUe Appserver is still under heavy development and not ready to be used
 in production environments.  This is a preview release. Please refer to the
 file named `NEWS' to see which version you are looking at here, and what has
 changed since earlier versions.
 
+
 Installation
 ------------
 To install GNUe Appserver on your system, follow the procedure described in the
 file `INSTALL'.
 
+
 Running AppServer
 -----------------
 The directory `samples' contains several files that help you to try out GNUe

Modified: trunk/gnue-appserver/src/__init__.py
===================================================================
--- trunk/gnue-appserver/src/__init__.py        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-appserver/src/__init__.py        2004-03-07 06:27:44 UTC (rev 5249)
@@ -58,3 +58,4 @@
 
 
 PACKAGE="GNUe-AppServer"
+TITLE="GNUe Application Server"
\ No newline at end of file

Modified: trunk/gnue-common/README
===================================================================
--- trunk/gnue-common/README    2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-common/README    2004-03-07 06:27:44 UTC (rev 5249)
@@ -2,7 +2,7 @@
 
 Introduction
 ------------
-GNUe-Common is the basis for the GNUe tools, such as Forms,
+GNUe Common Library is the basis for the GNUe tools, such as Forms,
 Reports, Application Server, and Designer.  It implements a
 database-abstraction layer that provides support for most major
 databases. A builtin XML-to-Object parser and Object-to-XML

Modified: trunk/gnue-common/src/__init__.py
===================================================================
--- trunk/gnue-common/src/__init__.py   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-common/src/__init__.py   2004-03-07 06:27:44 UTC (rev 5249)
@@ -18,10 +18,10 @@
 #
 # Copyright 2001-2004 Free Software Foundation
 #
-# Description: 
+# Description:
 """
-GNUe Common is a set of python modules that provide a 
-large amount of functionality usefull in many python 
+GNUe Common is a set of python modules that provide a
+large amount of functionality usefull in many python
 programs.
 """
 
@@ -64,3 +64,4 @@
 __hexversion__ = HEXVERSION
 
 PACKAGE="GNUe-Common"
+TITLE="GNUe Common Library"

Modified: trunk/gnue-dbtools/src/__init__.py
===================================================================
--- trunk/gnue-dbtools/src/__init__.py  2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-dbtools/src/__init__.py  2004-03-07 06:27:44 UTC (rev 5249)
@@ -56,4 +56,5 @@
 __version__ = VERSION
 __hexversion__ = HEXVERSION
 
-PACKAGE="GNUe DBTools"
+PACKAGE="GNUe-DBTools"
+Title = "GNUe Database Tools"
\ No newline at end of file

Modified: trunk/gnue-forms/FAQ
===================================================================
--- trunk/gnue-forms/FAQ        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-forms/FAQ        2004-03-07 06:27:44 UTC (rev 5249)
@@ -1,8 +1,10 @@
-INSTALLATION
+Installation
+------------
 
- Q: I want to run the cvs copy of gnuef but have a copy already installed
-    on the machine.  gnue-forms always seems run the installed code base.
-    How do I run the cvs copy without affecting the installed copy?
+ Q: I want to run the cvs copy of gnue-forms, but have a copy already
+    installed on the machine.  gnue-forms always seems run the installed
+    code base. How do I run the cvs copy without affecting the installed
+    copy?
 
  A: Run setup-cvs.py from the gnue-common/ directory.  This creates
     symlinks in the client directory to trick gnue-forms into using the CVS
@@ -12,40 +14,49 @@
     CVS gnue-forms against installed sources, just [re]move the gnue directory
     in the client directory.
 
+General
+-------
 
-ERROR MESSAGES
+ Q:  What about the curses client? Other clients?
 
- Q: I installed gnue forms but get an error about importing GFOptions.py.
+ A:  The curses client does not work properly but is under development.
+     Work on an HTML client has begun.
 
- A: You need to install the gnue-common package.  Available at http://www.gnue.org
-
-
  Q: I am using the PostgreSQL drivers and an getting an ImportError on pgdb.
 
  A: Try reinstalling the PygreSQL package.  Sometimes, PyGreSQL's installation
     script fails to copy this file.
 
-
  Q: I am getting an ImportError on DateTime.
 
  A: You are using a database driver that uses the mxDateTime package.
     You can download this package at
 
-        http://www.lemburg.com/files/python/mxDateTime.html
+        <http://www.lemburg.com/files/python/mxDateTime.html>
 
     Users of Debian Woody can install the package using
 
-        apt-get install python-egenix-mxdatetime
+        $ apt-get install python-egenix-mxdatetime
 
-    Note that Lemburg recently changed the directory structure of his
-    packages.  If you have installed mxDateTime and are still getting
-    this error, you may need to grab the development version of
-    PyGreSQL as the stable version (as of 05/01) still references the
-    old structure. Grab it at
 
-        ftp://ftp.druid.net/pub/distrib/PyGreSQL-beta.tgz
+ Q: I ran all the samples but a lot of them give me nasty errors and no
+    windows pop up on the display!? Some ask me for a username and a
+    password?
 
+ A: You have to set up a sample database to use some of the forms. For
+    this you must be registered as a postgres user. You must have the
+    rights to create a database.
 
+    Create a database with name gnue (issue "createdb gnue") and
+    another with name test. Enter in the directory
+    gnue/gnuef/samples/zipcode and issue "psql -f pg__zip_code.sql
+    gnue". Make sure that in all .gfd files where databases are used
+    the attribute "host" is set to "localhost" or your hostname.
+
+    There are still bugs in the wx client, but I won't list them
+    here. Ask the list, the FAQ or the Bug-report system on the gnue
+    website.
+
  Q: I need more help!
 
  A: See the individual INSTALL files for more platform specific FAQs
@@ -53,7 +64,7 @@
     If that doesn't work problems and/or questions are gladly accepted
     by the GNUE Forms team.  You can reach us via our mailing list
 
-    http://lists.gnue.org/mailman/listinfo
+        <http://lists.gnue.org/mailman/listinfo>
 
     Or via IRC at irc.freenode.net #gnuenterprise.
 

Modified: trunk/gnue-forms/INSTALL
===================================================================
--- trunk/gnue-forms/INSTALL    2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-forms/INSTALL    2004-03-07 06:27:44 UTC (rev 5249)
@@ -1,8 +1,8 @@
 Installation instructions for GNUe-Forms
 
-Quick Ref: (read below for more info)
-=====================================
-First intall the common package.
+Quick Install
+-------------
+First, install the GNUe Common package.
 
 Make sure that you have a valid /usr/local/gnue/etc/gnue.conf.
 This is installed by gnue-common package as a sample.gnue.conf.
@@ -13,16 +13,32 @@
                               also be python2.1 or python2.2)
 
 If you are upgrading from a version of forms prior to 0.5.0
-please reference the Upgrading Forms section below. 
+please reference the Upgrading Forms section below.
 
-General information:
-====================
 
-GNUe-Forms needs some prerequisites to work. You need a database
-managment system, a user-interface library, python as programming
-language and a xml-handling library.  As of this writting GNUe Forms
-allows you to use geas, postgresql, mysql, db2, oracle, or odbc.
+Requirements
+------------
+Forms needs the following in order to run:
 
+   * GNUe Common 0.5.3+
+
+   * A user-interface library:
+      - wxPython (<http://www.wxPython.org>)
+      - GTK 2
+      - QT 3
+      - NCurses
+
+Also, Forms can make use of the following tools if they are
+installed:
+
+   * GNUe Reports
+
+   * GNUe AppServer
+
+
+General Information
+-------------------
+
 Later there will be available interoperability for a variety of
 database systems via GNUe common.
 
@@ -36,13 +52,13 @@
 The rest of the document describes the steps to install GNUe-forms on
 a Debian 2.2, i386 platform.
 
-Debian packages to be installed:
-================================
+Debian packages to be installed
+-------------------------------
 You should apt-get install the following packages:
 
   python2.1-dev
   python2.1-egenix-mxdatetime
-  libwxgtk2.2-python
+  libwxgtk2.4-python
   python2.1-psycopg  (if needed -- for PostgreSQL support)
   python2.1-mysqldb  (if needed -- for MySQL support)
 
@@ -51,7 +67,7 @@
 
 
 Other software to be installed:
-===============================
+------------------------------
 
 Some of the following files have to be downloaded and installed. I will
 give short installation instructions and a compact list of the needed
@@ -63,7 +79,7 @@
 software.
 
 
-PyGreSQL.tgz:                    www.druid.net/pygresql
+  * PyGreSQL.tgz (<www.druid.net/pygresql>)
         Only needed for acceess to postgresql databases.
         Untar in some place. Enter in the directory PyGreSQL-3.0,
         issue "./setup.py build".
@@ -72,7 +88,7 @@
         "ln -s /usr/include/postgresql /usr/include/pgsql".
         As root issue finally "./setup.py install".
 
-    Note: if you are on the bleeding edge PyGreSQL
+     Note: if you are on the bleeding edge PyGreSQL
        is also included in postgreSQL cvs tree at
        pgsql/src/interfaces/python/.  If you build
        postgresql from cvs be sure to configure with
@@ -80,220 +96,58 @@
        postgresql. (currently this is not recommended as
        there may be problems with python 2.x in cvs)
 
-MySQLdb-0.3.2: http://dustman.net/andy/python/MySQLdb/
+  * MySQLdb-0.3.2 (<http://dustman.net/andy/python/MySQLdb/>)
         Only needed to access to mysql databases.
         Note: The debian package is current too old and does not work
         Untar in some place. Enter in the directory, issue
         "python setup.py build" and as root "python setup.py install".
 
-mxDateTime-2.x  http://www.lemburg.com/file/python/mxDateTime.html
+  * mxDateTime-2.x (<http://www.lemburg.com/file/python/mxDateTime.html>)
         Needed by several of the database drivers (including the
         PostgreSQL drivers). Untar in some place, enter the directory,
         and issue "python setup.py build" and as root "python setup.py
         install"
 
-wxGTK-2.2.2.tar.gz:     www.wxWidgets.org
+  * wxGTK-2.4 (<http://www.wxWidgets.org>)
         Untar in some place. Enter in the directory wxGTK, issue
         "./configure", "make" - this will run looong time, and then
-       "make install" as root.
+        "make install" as root.
 
-wxPython-2.2.2.tar.gz: www.wxpython.org/download.php
-       Untar in some place. Enter in the directory wxPython-2.2.2,
-       edit setup.py and change the following variables to the
-       indicated value (if not you get a compile time error):
+  * wxPython-2.4 (<http://www.wxpython.org/download.php>)
+        Untar in some place. Enter in the directory wxPython-2.2.2,
+        edit setup.py and change the following variables to the
+        indicated value (if not you get a compile time error):
 
-       BUILD_GLCANVAS = 0
+        BUILD_GLCANVAS = 0
 
-       issue "./setup.py build" - this will run long time, and as
-       root "./setup.py install"
+        issue "./setup.py build" - this will run long time, and as
+        root "./setup.py install"
 
-If you want to play with the curses-client you can download:
 
-pyncurses-0.3.tar.gz:   pyncurses.sourceforge.net
-        Only needed for older text interface (pytext)
-        Untar in some place. Enter in the directory pyncurses-0.3,
-        issue "debian/apply-patch.sh", "python setup.py build" and as
-        root "python setup.py install".
+Download table
+--------------
 
-        Pyncurses 0.3 does not currently build on Mac OS X.
-
-
-Download table:
-===============
-
-Distutils-1.0.1.tar.gz:          www.python.org
-PyGreSQL.tgz:            www.druid.net/pygresql
-wxGTK-2.2.2.tar.gz:      www.freiburg.linux.de/~wxxt/download.htm
-wxPython-2.2.2.tar.gz:   www.wxpython.org/download.php
+Distutils-1.0.1.tar.gz:          www.python.org
+PyGreSQL.tgz:                  www.druid.net/pygresql
+wxGTK-2.2.2.tar.gz:          www.freiburg.linux.de/~wxxt/download.htm
+wxPython-2.2.2.tar.gz:          www.wxpython.org/download.php
 mxDateTime                www.lemburg.com/files/python/mxDateTime.html
                         (this link is gone as of 12 Jan 2002)
                         (this may be included in python 2.x?)
-[pyncurses-0.3.tar.gz:   pyncurses.sourceforge.net]
+[pyncurses-0.3.tar.gz:          pyncurses.sourceforge.net]
 
 
-Download GNUe-Forms:
-====================
-
-* TODO: This section is out of date. Anonymous CVS not in use *
-
-By now I recommend anonymous cvs, so you need to have installed the
-cvs package. You can also download the corresponding tarball and untar
-it "in some place".
-
-If you connect the first time to the gnue cvs server issue:
-
-"cvs -d :pserver:address@hidden:/home/cvs login"
-
-You will be asked for a password. Just hit Enter.
-
-Every time you want to download (checkout) the actual version of gnue
-issue:
-
-"cvs -d :pserver:address@hidden:/home/cvs co gnue"
-
-If you want to update a yet checked out version issue:
-
-"cvs -d :pserver:address@hidden:/home/cvs update gnue"
-
-
-Enter in the gnue/gnuef directory. Issue "./setup.py build" and as
-root "./setyp.py install".
-
-
-Note: the current cvs defines two graphics splashScreenBMP and smallBMP in the GFOptions.py
-file. They are installed by default in /usr/local/gnue/shared/images
- 
-chmod -R o+rx /usr/local/gnue
-
-Now you can run your first sample: "gnue-forms samples/form.gfd".
-
-Upgrading Forms:
-================
+Upgrading Forms
+---------------
 The 0.5.0 release of gnue-forms uses a gfd format that is incompatible
-with prior releases.  A utility named gfd04to05.py has been provided 
-in the forms/utils/ directory that will convert a pre 0.5.0 form to the 
+with prior releases.  A utility named gfd04to05.py has been provided
+in the forms/utils/ directory that will convert a pre 0.5.0 form to the
 new format.
 
-Usage:
-gfd04to05.py oldFormName.gfd newFormName.gfd
+ Usage:
+   $ gfd04to05.py oldFormName.gfd newFormName.gfd
 
 If you omit the second name (newFormName.gfd) then gfd04to05.py
 will create a backup of your form in oldFormName.gfd-PRE050 then
-overwrite the existing form with the new format. 
+overwrite the existing form with the new format.
 
-Open Questions:
-===============
-
-Q: What about other linuxes or other *n*x platforms?
-
-A: Well! What about them? Tell me!
-
-   Some core developers use RedHat Linux, so it works. They had some
-   trouble with some libraries and a fix was on the mailing list, look
-   at the archive.
-   If you manage to install gnue(f) on another platform please publish
-   the recipe to the gnue mailing-list or as a last resort to me
-   (address@hidden).
-
-   Platforms gnuef is known to work on:
-     RedHat Linux
-     Debian GNU/Linux
-     Solaris 2.5.1  
-
-Q: What do I do if I cannot become root on the computer I am using?
-
-A: Short answer: Don't even try it!  
-
-   Longer answer: gnuef needs some "modules" installed and I don't
-   know how to twist python to find them in a local directory, python
-   experts could give advice but you still need to twist three other
-   python packages and one C++ package into a local
-   installation. Don't even speaking about debian packages. But maybe
-   if you are very nice to your system administrator...
-
-
-Q: I don't have version x.ww.zzz of software foo installed. Will it
-   work or do I need to install the version you mentioned.
-
-A: Short answer: try it out.
-
-   Longer answer: better don't try it out and follow my way :-)
-
-   Long answer: With all the other packages I did not even
-   check the docs. If you have some positive experience (version x.y.z
-   WORKS) let me know it and I will put it here. Oh', glibc2 is also
-   required, but I think that almost nothing works yet without it.
-
-
-Q: I ran all the samples but a lot of them give me nasty errors and no
-   windows pop up on the display!? Some ask me for a username and a
-   password?
-
-A: You have to set up a sample database to use some of the forms. For
-   this you must be registered as a postgres user. You must have the
-   rights to create a database.
-
-   Create a database with name gnue (issue "createdb gnue") and
-   another with name test. Enter in the directory
-   gnue/gnuef/samples/zipcode and issue "psql -f pg__zip_code.sql
-   gnue". Make sure that in all .gfd files where databases are used
-   the attribute "host" is set to "localhost" or your hostname.
-
-   Note: Maybe postgres has to be run over TCP/IP and not only locally
-   but I don't know exactly.
-
-   There are still bugs in the wx client, but I won't list them
-   here. Ask the list, the FAQ or the Bug-report system on the gnue
-   website. 
-
-
-Q: What about the curses client? Other clients?
-
-A: The curses client does not work properly but is under development.
-   Work on an HTML client has begun.
-
-
-Q: I have installed all the stuff and now?
-
-A: GNUe-forms is very incomplete by now. Please help development by
-   reporting any bugs you find. If you write any succesful forms
-   please donate them to the samples pool. Contact:
-
-   http://www.gnue.org
-   address@hidden
-
-   Last resort: address@hidden
-
-
-Q: I want a more actual version of gnue. I don't want to download the
-   whole gnue repository but only gnue-forms.
-
-A: Look at the chapter "Download GNUe-Forms" how to cvs-update
-   gnue. With "update" you download only the changes from your version
-   to the actual verson of the archives.
-   You can issue "... co gnue/gnuef" if you only want to download that
-   subdirectory.
-
-
-Copyright:
-==========
-
-The author of this document is Georg Lehner. You can reach me at
-address@hidden or better via the mailing list
-address@hidden James Thompson <address@hidden>has been 
-altering this document as information changes.
-
-This document may be redistributed freely within the terms of the Gnu
-Document Public License, look at www.gnu.org for further reference.
-
-If you do not have access to the internet the person who gave this
-document to you has to provide you with the license text also.
-
-Correctright:
-=============
-
-You have the right to correct this document as it contains a lot of
-typing and gramatic and stylistic errors and all other kind of ugly
-bugs. But please be kind and tell us all about the corrections so we
-can correct them too and spare other people linguistic and cultural
-shocks ;->

Modified: trunk/gnue-forms/README
===================================================================
--- trunk/gnue-forms/README     2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-forms/README     2004-03-07 06:27:44 UTC (rev 5249)
@@ -13,18 +13,18 @@
 Like all GNUe tools, Forms runs on most modern platforms and can communicate
 with the vast majority of modern SQL-compliant database backends. With a modular
 user interface kit, new interface types can be quickly and easily added. Form's
-primary user interface is the wxPython toolkit, which allows us to support
-Windows, GTK, Mac OS/X, and OS/2 out of the box. The GNUe team is also
-coordinating native KDE/QT, web-based HTML, native MS Windows, text-only curses,
-and and native GTK2 user interfaces. Forms includes support for Python-based
-events or triggers, for real-time validation of user data.
+primary user interface is the wxPython (<http://www.python.org>) toolkit, which
+allows us to support Windows, GTK, Mac OS/X, and OS/2 out of the box. The GNUe
+team is also coordinating native KDE/QT, web-based HTML, native MS Windows,
+text-only curses, and and native GTK2 user interfaces. Forms includes support
+for Python-based events or triggers, for real-time validation of user data.
 
-Forms will seemlessly work with GNUe App Server, GNUe Navigator, and GNUe
+Forms will seemlessly work with GNUe AppServer, GNUe Navigator, and GNUe
 Reports if they are present. However, as part of GNUe's modular framework, Forms
 does not require any other GNUe tools to function.
 
 Forms is currently in use in several locations, including commercial,
-non-profit, and academic.
+non-profit, and academic sites.
 
 
 Compatability

Modified: trunk/gnue-forms/src/__init__.py
===================================================================
--- trunk/gnue-forms/src/__init__.py    2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-forms/src/__init__.py    2004-03-07 06:27:44 UTC (rev 5249)
@@ -59,3 +59,4 @@
 
 
 PACKAGE="GNUe-Forms"
+TITLE="GNUe Forms"

Modified: trunk/gnue-integrator/src/__init__.py
===================================================================
--- trunk/gnue-integrator/src/__init__.py       2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-integrator/src/__init__.py       2004-03-07 06:27:44 UTC (rev 5249)
@@ -56,4 +56,5 @@
 __hexversion__ = HEXVERSION
 
 
-PACKAGE="GNUe Integrator"
+PACKAGE="GNUe-Integrator"
+TITLE="GNUe Integrator"

Modified: trunk/gnue-navigator/INSTALL
===================================================================
--- trunk/gnue-navigator/INSTALL        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-navigator/INSTALL        2004-03-07 06:27:44 UTC (rev 5249)
@@ -1,6 +1,29 @@
 GNUe Navigator is a navigation/menuing system for GNU Enterprise.
 
+Requirements
+------------
+Navigator needs the following:
 
+   * GNUe Common 0.5.3+
+
+   * A user-interface library:
+      - wxPython (<http://www.wxPython.org>)
+      - GTK 2
+
+
+Also, Navigator can make use of the following tools if they are
+installed:
+
+   * GNUe Forms
+
+   * GNUe Reports
+
+   * GNUe AppServer
+
+
+General Installation
+--------------------
+
 To install:
 
   ./setup.py install

Modified: trunk/gnue-navigator/src/__init__.py
===================================================================
--- trunk/gnue-navigator/src/__init__.py        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-navigator/src/__init__.py        2004-03-07 06:27:44 UTC (rev 5249)
@@ -57,3 +57,4 @@
 
 
 PACKAGE="GNUe-Navigator"
+TITLE="GNUe Navigator"

Modified: trunk/gnue-pos/src/__init__.py
===================================================================
--- trunk/gnue-pos/src/__init__.py      2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-pos/src/__init__.py      2004-03-07 06:27:44 UTC (rev 5249)
@@ -56,4 +56,5 @@
 __hexversion__ = HEXVERSION
 
 
-PACKAGE="GNUe Point-of-Sale"
+PACKAGE="GNUe-POS"
+TITLE="GNUe Point-of-Sale"

Modified: trunk/gnue-reports/README
===================================================================
--- trunk/gnue-reports/README   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-reports/README   2004-03-07 06:27:44 UTC (rev 5249)
@@ -2,7 +2,7 @@
 
 Introduction
 ------------
-GNUe-Reports is a platform and output-independent reporting system. It reads an
+GNUe Reports is a platform and output-independent reporting system. It reads an
 XML-based report definition and generates arbitrary XML output that can further
 be translated into any format for which there is an adapter. GNUe Reports
 currently has outputs for Text, HTML, Label Stock, and CSV -- with PDF,

Modified: trunk/gnue-reports/src/__init__.py
===================================================================
--- trunk/gnue-reports/src/__init__.py  2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/gnue-reports/src/__init__.py  2004-03-07 06:27:44 UTC (rev 5249)
@@ -57,3 +57,4 @@
 
 
 PACKAGE="GNUe-Reports"
+TITLE="GNUe Reports"

Added: trunk/www/releases/release-files/README
===================================================================
--- trunk/www/releases/release-files/README     2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/releases/release-files/README     2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,21 @@
+This directory contains the various support files (README, NEWS, INSTALL, 
+ChangeLog) used to build the website. 
+
+This keeps svn HEAD instructions from appearing on the website. 
+
+Normally, when a tool is released, that tool's files should be copied into
+this directory, named with the format <tool>.<file>. 
+
+For example, 
+
+  forms.README
+  appserver.NEWS
+
+If major changes are made to a file after a release, and you wish to
+have those changes made visible on the website immediately, copy that
+file here. 
+
+If a file does not exist in the current directory, then the web scripts
+will attempt to fetch the current version from the svn repository. 
+(Usually this is only the case for a new tool that has never been 
+released.)

Modified: trunk/www/utils/create-release-announcements
===================================================================
--- trunk/www/utils/create-release-announcements        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/create-release-announcements        2004-03-07 06:27:44 UTC (rev 5249)
@@ -1,5 +1,23 @@
 #!/usr/bin/python
 
-from helpers import files
+from helpers.files import SVN_BASE
+from helpers.tools import Tool
 
-from helpers.tools import Tool
\ No newline at end of file
+import sys
+
+
+def syntax():
+  print """Syntax: %s (forms|reports|appserver|designer|..) [..]""" % (sys.argv[0])
+  sys.exit()
+
+
+
+if __name__ == '__main__':
+  modules = sys.argv[1:]
+
+  if not len(modules):
+    syntax()
+
+  tools = []
+  for module in modules:
+    tools.append(Tool(module))

Added: trunk/www/utils/helpers/docutils/COPYING.txt
===================================================================
--- trunk/www/utils/helpers/docutils/COPYING.txt        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/COPYING.txt        2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,124 @@
+==================
+ Copying Docutils
+==================
+
+:Author: David Goodger
+:Contact: address@hidden
+:Date: $Date: 2003/06/16 03:26:53 $
+:Web site: http://docutils.sourceforge.net/
+:Copyright: This document has been placed in the public domain.
+
+Most of the files included in this project have been placed in the
+public domain, and therefore have no license requirements and no
+restrictions on copying or usage; see the `Public Domain Dedication`_
+below.  There are a few exceptions_, listed below.
+
+One goal of the Docutils project is to be included in the Python
+standard library distribution, at which time it is expected that
+copyright will be asserted by the `Python Software Foundation
+<http://www.python.org/psf/>`_.
+
+
+Public Domain Dedication
+========================
+
+The persons who have associated their work with this project (the
+"Dedicator": David Goodger and the many contributors to the Docutils
+project) hereby dedicate the entire copyright, less the exceptions_
+listed below, in the work of authorship known as "Docutils" identified
+below (the "Work") to the public domain.
+
+The primary repository for the Work is the Internet World Wide Web
+site <http://docutils.sourceforge.net/>.  The Work consists of the
+files within the "docutils" module of the Docutils project CVS
+repository (Internet host cvs.sourceforge.net, filesystem path
+/cvsroot/docutils), whose Internet web interface is located at
+<http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/docutils/docutils/>.
+Files dedicated to the public domain may be identified by the
+inclusion, near the beginning of each file, of a declaration of the
+form::
+
+    Copyright: This document/module/DTD/stylesheet/file/etc. has been
+               placed in the public domain.
+
+Dedicator makes this dedication for the benefit of the public at large
+and to the detriment of Dedicator's heirs and successors.  Dedicator
+intends this dedication to be an overt act of relinquishment in
+perpetuity of all present and future rights under copyright law,
+whether vested or contingent, in the Work.  Dedicator understands that
+such relinquishment of all rights includes the relinquishment of all
+rights to enforce (by lawsuit or otherwise) those copyrights in the
+Work.
+
+Dedicator recognizes that, once placed in the public domain, the Work
+may be freely reproduced, distributed, transmitted, used, modified,
+built upon, or otherwise exploited by anyone for any purpose,
+commercial or non-commercial, and in any way, including by methods
+that have not yet been invented or conceived.
+
+(This dedication is derived from the text of the `Creative Commons
+Public Domain Dedication
+<http://creativecommons.org/licenses/publicdomain>`_.)
+
+
+Exceptions
+==========
+
+The exceptions to the `Public Domain Dedication`_ above are:
+
+* extras/optparse.py, copyright by Gregory P. Ward, released under a
+  BSD-style license (which can be found in the module's source code).
+
+* extras/textwrap.py, copyright by Gregory P. Ward and the Python
+  Software Foundation, released under the `Python 2.3 license`_
+  (`local copy`__).
+
+  __ licenses/python-2-3.txt
+
+* extras/roman.py, copyright by Mark Pilgrim, released under the
+  `Python 2.1.1 license`_ (`local copy`__).
+
+  __ licenses/python-2-1-1.txt
+
+* test/difflib.py, copyright by the Python Software Foundation,
+  released under the `Python 2.2 license`_ (`local copy`__).  This
+  file is included for compatibility with Python versions less than
+  2.2; if you have Python 2.2 or higher, difflib.py is not needed and
+  may be removed.  (It's only used to report test failures anyhow; it
+  isn't installed anywhere.  The included file is a pre-generator
+  version of the difflib.py module included in Python 2.2.)
+
+  __ licenses/python-2-2.txt
+
+* tools/pep2html.py, copyright by the Python Software Foundation,
+  released under the `Python 2.2 license`_ (`local copy`__).
+
+  __ licenses/python-2-2.txt
+
+* tools/editors/emacs/rst-html.el, copyright by Martin Blais, released
+  under the `GNU General Public License`_ (`local copy`__).
+
+  __ licenses/gpl.txt
+
+* tools/editors/emacs/rst-mode.el, copyright by Stefan Merten,
+  released under the `GNU General Public License`_ (`local copy`__).
+
+  __ licenses/gpl.txt
+
+(Disclaimer: I am not a lawyer.)  The BSD license and the Python
+licenses are OSI-approved_ and GPL-compatible_.  Although complicated
+by multiple owners and lots of legalese, the Python license basically
+lets you copy, use, modify, and redistribute files as long as you keep
+the copyright attribution intact, note any changes you make, and don't
+use the owner's name in vain.  The BSD license is similar.
+
+Plaintext versions of all the linked-to licenses are provided in the
+licenses_ directory.
+
+.. _licenses: licenses/
+.. _Python 2.1.1 license: http://www.python.org/2.1.1/license.html
+.. _Python 2.2 license: http://www.python.org/2.2/license.html
+.. _Python 2.3 license: http://www.python.org/2.3/license.html
+.. _GNU General Public License: http://www.gnu.org/copyleft/gpl.html
+.. _OSI-approved: http://opensource.org/licenses/
+.. _GPL-compatible: http://www.gnu.org/philosophy/license-list.html

Added: trunk/www/utils/helpers/docutils/FAQ.txt
===================================================================
--- trunk/www/utils/helpers/docutils/FAQ.txt    2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/FAQ.txt    2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,478 @@
+=====================================
+ Docutils Frequently Asked Questions
+=====================================
+
+:Date: $Date: 2003/06/09 21:26:29 $
+:Web site: http://docutils.sourceforge.net/
+:Copyright: This document has been placed in the public domain.
+
+.. Please note that until there's a Q&A-specific construct available,
+   this FAQ will use section titles for questions.  Therefore
+   questions must fit on one line.  The title may be a summary of the
+   question, with the full question in the section body.
+
+
+.. contents::
+.. sectnum::
+
+
+This is a work in progress.  Please feel free to ask questions and/or
+provide answers; `send email`__ to the `Docutils-Users mailing
+list`__.  Project members should feel free to edit the source text
+file directly.
+
+.. _let us know:
+__ mailto:address@hidden
+__ http://lists.sourceforge.net/lists/listinfo/docutils-users
+
+
+Docutils
+========
+
+What is Docutils?
+-----------------
+
+Docutils_ is a system for processing plaintext documentation into
+useful formats, such as HTML, XML, and TeX.  It supports multiple
+types of input, such as standalone files (implemented), inline
+documentation from Python modules and packages (under development),
+`PEPs (Python Enhancement Proposals)`_ (implemented), and others as
+discovered.
+
+For an overview of the Docutils project implementation, see `PEP
+258`_, "Docutils Design Specification".
+
+Docutils is implemented in Python_.
+
+.. _Docutils: http://docutils.sourceforge.net/
+.. _PEPs (Python Enhancement Proposals):
+   http://www.python.org/peps/pep-0012.html
+.. _PEP 258: spec/pep-0258.html
+.. _Python: http://www.python.org/
+
+
+Why is it called "Docutils"?
+----------------------------
+
+Docutils is short for "Python Documentation Utilities".  The name
+"Docutils" was inspired by "Distutils", the Python Distribution
+Utilities architected by Greg Ward, a component of Python's standard
+library.
+
+The earliest known use of the term "docutils" in a Python context was
+a `fleeting reference`__ in a message by Fred Drake on 1999-12-02 in
+the Python Doc-SIG mailing list.  It was suggested `as a project
+name`__ on 2000-11-27 on Doc-SIG, again by Fred Drake, in response to
+a question from Tony "Tibs" Ibbs: "What do we want to *call* this
+thing?".  This was shortly after David Goodger first `announced
+reStructuredText`__ on Doc-SIG.
+
+Tibs used the name "Docutils" for `his effort`__ "to document what the
+Python docutils package should support, with a particular emphasis on
+documentation strings".  Tibs joined the current project (and its
+predecessors) and graciously donated the name.
+
+For more history of reStructuredText and the Docutils project, see `An
+Introduction to reStructuredText`_.
+
+Please note that the name is "Docutils", not "DocUtils" or "Doc-Utils"
+or any other variation.
+
+.. _An Introduction to reStructuredText: spec/rst/introduction.html
+__ http://mail.python.org/pipermail/doc-sig/1999-December/000878.html
+__ http://mail.python.org/pipermail/doc-sig/2000-November/001252.html
+__ http://mail.python.org/pipermail/doc-sig/2000-November/001239.html
+__ http://homepage.ntlworld.com/tibsnjoan/docutils/STpy.html
+
+
+Is there a GUI authoring environment for Docutils?
+--------------------------------------------------
+
+DocFactory_ is under development.  It uses wxPython and looks very
+promising.
+
+.. _DocFactory:
+   http://docutils.sf.net/sandbox/gschwant/docfactory/doc/
+
+
+What is the status of the Docutils project?
+-------------------------------------------
+
+Although useful and relatively stable, Docutils is experimental code,
+with APIs and architecture subject to change.
+
+Our highest priority is to fix bugs as they are reported.  So the
+latest code from CVS (or `development snapshots`_) is almost always
+the most stable (bug-free) as well as the most featureful.
+
+
+What is the Docutils project release policy?
+--------------------------------------------
+
+It ought to be "release early & often", but official releases are a
+significant effort and aren't done that often.  We have
+automatically-generated `development snapshots`_ which always contain
+the latest code from CVS.  As the project matures, we may formalize on
+a stable/development-branch scheme, but we're not using anything like
+that yet.
+
+If anyone would like to volunteer as a release coordinator, please
+`contact the project coordinator`_.
+
+.. _development snapshots:
+   http://docutils.sf.net/#development-snapshots
+
+.. _contact the project coordinator:
+   mailto:address@hidden
+
+
+reStructuredText
+================
+
+What is reStructuredText?
+-------------------------
+
+reStructuredText_ is an easy-to-read, what-you-see-is-what-you-get
+plaintext markup syntax and parser system.  The reStructuredText
+parser is a component of Docutils_.  reStructuredText is a revision
+and reinterpretation of the StructuredText_ and Setext_ lightweight
+markup systems.
+
+If you are reading this on the web, you can see for yourself.  `The
+source for this FAQ <FAQ.txt>`_ is written in reStructuredText; open
+it in another window and compare them side by side.
+
+`A ReStructuredText Primer <docs/rst/quickstart.html>`_ and the `Quick
+reStructuredText <docs/rst/quickref.html>`_ user reference are a good
+place to start.  The `reStructuredText Markup Specification
+<spec/rst/reStructuredText.html>`_ is a detailed technical
+specification.
+
+.. _reStructuredText: http://docutils.sourceforge.net/rst.html
+.. _StructuredText:
+   http://dev.zope.org/Members/jim/StructuredTextWiki/FrontPage/
+.. _Setext: mirror/setext.html
+
+
+Why is it called "reStructuredText"?
+------------------------------------
+
+The name came from a combination of "StructuredText", one of
+reStructuredText's predecessors, with "re": "revised", "reworked", and
+"reinterpreted", and as in the ``re.py`` regular expression module.
+For a detailed history of reStructuredText and the Docutils project,
+see `An Introduction to reStructuredText`_.
+
+
+What's the standard abbreviation for "reStructuredText"?
+--------------------------------------------------------
+
+"RST" and "ReST" (or "reST") are both acceptable.  Care should be
+taken with capitalization, to avoid confusion with "REST__", an
+acronym for "Representational State Transfer".
+
+The abbreviations "reSTX" and "rSTX"/"rstx" should **not** be used;
+they overemphasize reStructuredText's precedessor, Zope's
+StructuredText.
+
+__ http://www.xml.com/pub/a/2002/02/06/rest.html
+
+
+What's the standard filename extension for a reStructuredText file?
+-------------------------------------------------------------------
+
+It's ".txt".  Some people would like to use ".rest" or ".rst" or
+".restx", but why bother?  ReStructuredText source files are meant to
+be readable as plaintext, and most operating systems already associate
+".txt" with text files.  Using a specialized filename extension would
+require that users alter their OS settings, which is something that
+many users will not be willing or able to do.
+
+
+Are there any reStructuredText editor extensions?
+-------------------------------------------------
+
+There is `some code under development for Emacs`__.
+
+Extensions for other editors are welcome.
+
+__ http://docutils.sf.net/tools/editors/emacs/
+
+
+How can I indicate the document title?  Subtitle?
+-------------------------------------------------
+
+A uniquely-adorned section title at the beginning of a document is
+treated specially, as the document title.  Similarly, a
+uniquely-adorned section title immediately after the document title
+becomes the document subtitle.  For example::
+
+    This is the Document Title
+    ==========================
+
+    This is the Document Subtitle
+    -----------------------------
+
+    Here's an ordinary paragraph.
+
+Counterexample::
+
+    Here's an ordinary paragraph.
+
+    This is *not* a Document Title
+    ==============================
+
+    The "ordinary paragraph" above the section title
+    prevents it from becoming the document title.
+
+
+How can I represent esoteric characters (e.g. character entities) in a document?
+--------------------------------------------------------------------------------
+
+For example, say you want an em-dash (XML character entity &mdash;,
+Unicode character ``\u2014``) in your document: use a real em-dash.
+Insert concrete characters (e.g. type a *real* em-dash) into your
+input file, using whatever encoding suits your application, and tell
+Docutils the input encoding.  Docutils uses Unicode internally, so the
+em-dash character is a real em-dash internally.
+
+ReStructuredText has no character entity subsystem; it doesn't know
+anything about XML charents.  "&mdash;" in input text is 7 discrete
+characters to Docutils; no interpretation happens.  When writing HTML,
+the "&" is converted to "&amp;", so in the output you'd see
+"&amp;mdash;".  There's no difference in interpretation for text
+inside or outside inline literals or literal blocks -- no character
+entity interpretation in either case.
+
+If you can't use a Unicode-compatible encoding and must rely on 7-bit
+ASCII, there is a workaround, although ugly.  David Priest developed a
+substitution table for character entities; see
+<http://article.gmane.org/gmane.comp.python.documentation/432> and
+David's other March Doc-SIG posts.  Incorporating this into Docutils
+is on the `to-do list <spec/notes.html#to-do>`_.
+
+If you insist on using XML-style charents, you'll have to implement a
+pre-processing system to convert to UTF-8 or something.  That opens a
+can of worms though; you can no longer *write* about charents
+naturally; you'd have to write "&amp;mdash;".
+
+
+How can I generate backticks using a Scandinavian keyboard?
+-----------------------------------------------------------
+
+The use of backticks in reStructuredText is a bit awkward with
+Scandinavian keyboards, where the backtick is a "dead" key.  To get
+one ` character one must press SHIFT-` + SPACE.
+
+Unfortunately, with all the variations out there, there's no way to
+please everyone.  For Scandinavian programmers and technical writers,
+this is not limited to reStructuredText but affects many languages and
+environments.
+
+Possible solutions include
+
+* If you have to input a lot of backticks, simply type one in the
+  normal/awkward way, select it, copy and then paste the rest (CTRL-V
+  is a lot faster than SHIFT-` + SPACE).
+
+* Use keyboard macros.
+
+* Remap the keyboard.  The Scandinavian keyboard layout is awkward for
+  other programming/technical characters too; for example, []{}
+  etc. are a bit awkward compared to US keyboards.
+
+If anyone knows of other/better solutions, please `let us know`_.
+
+
+Are there any tools for HTML/XML-to-reStructuredText?  (Round-tripping)
+-----------------------------------------------------------------------
+
+People have tossed the idea around, but little if any actual work has
+ever been done.  There's no reason why reStructuredText should not be
+round-trippable to/from XML; any technicalities which prevent
+round-tripping would be considered bugs.  Whitespace would not be
+identical, but paragraphs shouldn't suffer.  The tricky parts would be
+the smaller details, like links and IDs and other bookkeeping.
+
+For HTML, true round-tripping may not be possible.  Even adding lots
+of extra "class" attributes may not be enough.  A "simple HTML" to RST
+filter is possible -- for some definition of "simple HTML" -- but HTML
+is used as dumb formatting so much that such a filter may not be
+particularly useful.  No general-purpose filter exists.  An 80/20
+approach should work though: build a tool that does 80% of the work
+automatically, leaving the other 20% for manual tweaks.
+
+
+Are there any Wikis that use reStructuredText syntax?
+-----------------------------------------------------
+
+There are several, with various degrees of completeness.  With no
+implied endorsement or recommendation, and in no particular order:
+
+* `Ian Bicking's experimental code <sandbox/ianb/wiki/WikiPage.py>`__
+* `MoinMoin <http://moin.sf.net>`__ has some support; `here's a sample
+  <http://twistedmatrix.com/users/jh.twistd/moin/moin.cgi/RestSample>`__
+* Zope-based `Zwiki <http://zwiki.org/>`__
+* `StikiWiki <http://mithrandr.moria.org/code/stikiwiki/>`__
+
+Please `let us know`_ of any other reStructuredText Wikis.
+
+The example application for the `Web Framework Shootout
+<http://colorstudy.com/docs/shootout.html>` article is a Wiki using
+reStructuredText.
+
+
+Are there any Weblog (Blog) projects that use reStructuredText syntax?
+----------------------------------------------------------------------
+
+With no implied endorsement or recommendation, and in no particular
+order:
+
+* `Python Desktop Server <http://pyds.muensterland.org/>`__
+* `PyBloxsom <http://roughingit.subtlehints.net/pyblosxom>`__
+
+Please `let us know`_ of any other reStructuredText Blogs.
+
+
+HTML Writer
+===========
+
+What is the status of the HTML Writer?
+--------------------------------------
+
+The HTML Writer module, ``docutils/writers/html4css1.py``, is a
+proof-of-concept reference implementation.  While it is a complete
+implementation, some aspects of the HTML it produces may be
+incompatible with older browsers or specialized applications (such as
+web templating).  Alternate implementations are welcome.
+
+
+What kind of HTML does it produce?
+----------------------------------
+
+It produces XHTML compatible with the `HTML 4.01`_ and `XHTML 1.0`_
+specifications.  A cascading style sheet ("default.css" by default) is
+required for proper viewing with a modern graphical browser.  Correct
+rendering of the HTML produced depends on the CSS support of the
+browser.
+
+.. _HTML 4.01: http://www.w3.org/TR/html4/
+.. _XHTML 1.0: http://www.w3.org/TR/xhtml1/
+
+
+What browsers are supported?
+----------------------------
+
+No specific browser is targeted; all modern graphical browsers should
+work.  Some older browsers, text-only browsers, and browsers without
+full CSS support are known to produce inferior results.  Mozilla
+(version 1.0 and up) and MS Internet Explorer (version 5.0 and up) are
+known to give good results.  Reports of experiences with other
+browsers are welcome.
+
+
+Unexpected results from tools/html.py: H1, H1 instead of H1, H2.  Why?
+----------------------------------------------------------------------
+
+Here's the question in full:
+
+    I have this text::
+
+        Heading 1
+        =========
+
+        All my life, I wanted to be H1.
+
+        Heading 1.1
+        -----------
+
+        But along came H1, and so shouldn't I be H2?
+        No!  I'm H1!
+
+        Heading 1.1.1
+        *************
+
+        Yeah, imagine me, I'm stuck at H3!  No?!?
+
+    When I run it through tools/html.py, I get unexpected results
+    (below).  I was expecting H1, H2, then H3; instead, I get H1, H1,
+    H2::
+
+        ...
+        <html lang="en">
+        <head>
+        ...
+        <title>Heading 1</title>
+        <link rel="stylesheet" href="default.css" type="text/css" />
+        </head>
+        <body>
+        <div class="document" id="heading-1">
+        <h1 class="title">Heading 1</h1>                <-- first H1
+        <p>All my life, I wanted to be H1.</p>
+        <div class="section" id="heading-1-1">
+        <h1><a name="heading-1-1">Heading 1.1</a></h1>        <-- H1
+        <p>But along came H1, and so now I must be H2.</p>
+        <div class="section" id="heading-1-1-1">
+        <h2><a name="heading-1-1-1">Heading 1.1.1</a></h2>
+        <p>Yeah, imagine me, I'm stuck at H3!</p>
+        ...
+
+    What gives? 
+
+Check the "class" attribute on the H1 tags, and you will see a
+difference.  The first H1 is actually ``<h1 class="title">``; this is
+the document title, and the default stylesheet renders it centered.
+There can also be an ``<h2 class="subtitle">`` for the document
+subtitle.
+
+If there's only one highest-level section title at the beginning of a
+document, it is treated specially, as the document title.  (Similarly,
+a lone second-highest-level section title may become the document
+subtitle.)  Rather than use a plain H1 for that, we use ``<h1
+class="title">`` so that we can use H1 again within the document.  Why
+do we do this?  HTML only has H1-H6, so by making H1 do double duty,
+we effectively reserve these tags to provide 6 levels of heading
+beyond the single document title.
+
+The HTML is used only as a dumb formatting layer for final display;
+all of the visual styling is left to CSS.  A stylesheet *is
+required*, and one is provided: tools/stylesheets/default.css.  Of
+course, you're welcome to roll your own.
+
+(Thanks to Mark McEahern for the question and much of the answer.)
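+
+To see this from Python, run a small sample through the HTML writer
+and look for the "title" class.  This is only a sketch using the
+``publish_string()`` convenience function; the ``writer_name``
+keyword is an assumption and may differ in this release::
+
+    from docutils.core import publish_string
+
+    # The sample document from the question above, built line by line.
+    source = '\n'.join([
+        'Heading 1',
+        '=========',
+        '',
+        'All my life, I wanted to be H1.',
+        '',
+        'Heading 1.1',
+        '-----------',
+        '',
+        'But along came H1.',
+        ''])
+
+    html = publish_string(source, writer_name='html')
+    # The lone top-level title is promoted to the document title:
+    print html.find('<h1 class="title">') != -1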
+
+
+Why do enumerated lists only use numbers (no letters or roman numerals)?
+------------------------------------------------------------------------
+
+The rendering of enumerators (the numbers or letters acting as list
+markers) is completely governed by the stylesheet, so either the
+browser can't find the stylesheet (try using the "--embed-stylesheet"
+option), or the browser can't understand it (try a recent Mozilla or
+MSIE).
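+
+For example, to embed the stylesheet directly in the generated page
+from Python (a sketch only; the ``embed_stylesheet`` setting name is
+an assumption based on the "--embed-stylesheet" option and may differ
+in this release)::
+
+    from docutils.core import publish_file
+
+    # With the CSS copied into <head>, the enumerator styling cannot
+    # be lost even if the HTML file is viewed in isolation.
+    publish_file(source_path='lists.txt',
+                 destination_path='lists.html',
+                 writer_name='html',
+                 settings_overrides={'embed_stylesheet': 1})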
+
+
+Python Source Reader
+====================
+
+Can I use Docutils for Python auto-documentation?
+-------------------------------------------------
+
+Docstring extraction is still under development.  Most of a source
+code parsing module exists in docutils/readers/python/moduleparser.py.
+I (David Goodger) haven't worked on it in a while, but I do plan to
+finish it eventually.  Ian Bicking wrote an initial front end for my
+moduleparser module, in sandbox/ianb/extractor/extractor.py.
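+
+A very rough sketch of what extraction might look like once finished
+(the ``parse_module()`` entry point and its output are assumptions;
+the module is unfinished and its interface may well change)::
+
+    from docutils.readers.python import moduleparser
+
+    source = open('some_module.py').read()
+    # Hypothetical: parse the module into a tree of structure and
+    # docstring nodes for later processing.
+    tree = moduleparser.parse_module(source, 'some_module.py')
+    print tree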
+
+
+Miscellaneous
+=============
+
+Is the Docutils document model based on any existing XML models?
+----------------------------------------------------------------
+
+Not directly, no.  It borrows bits from DocBook, HTML, and others.  I
+(David Goodger) have designed several document models over the years,
+and have my own biases.  The Docutils document model is designed for
+simplicity and extensibility, and has been influenced by the needs of
+the reStructuredText markup.

Added: trunk/www/utils/helpers/docutils/HISTORY.txt
===================================================================
--- trunk/www/utils/helpers/docutils/HISTORY.txt        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/HISTORY.txt        2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,1138 @@
+==================
+ Docutils History
+==================
+
+:Author: David Goodger
+:Contact: address@hidden
+:Date: $Date: 2003/06/25 01:47:04 $
+:Web site: http://docutils.sourceforge.net/
+:Copyright: This document has been placed in the public domain.
+
+.. contents::
+
+Acknowledgements
+================
+
+I would like to acknowledge the people who have made a direct impact
+on the Docutils project, knowingly or not, in terms of encouragement,
+suggestions, criticism, bug reports, code contributions, tasty treats,
+and related projects:
+
+    Aahz, David Abrahams, David Ascher, Eric Bellot, Ian Bicking,
+    Martin Blais, Fred Bremmer, Simon Budig, Bill Bumgarner, Brett
+    Cannon, Adam Chodorowski, Jason Diamond, William Dode, Fred Drake,
+    Dethe Elza, Marcus Ertl, Benja Fallenstein, fantasai, Stefane
+    Fermigier, Jim Fulton, Peter Funk, Jorge Gonzalez, Engelbert
+    Gruber, Simon Hefti, Doug Hellmann, Juergen Hermann, Michael
+    Hudson, Marcelo Huerta San Martin, Ludger Humbert, Jeremy Hylton,
+    Tony Ibbs, Alan Jaffray, Dmitry Jemerov, Richard Jones, Andreas
+    Jung, Garth Kidd, Nicola Larosa, Daniel Larsson, Marc-Andre
+    Lemburg, Julien Letessier, Wolfgang Lipp, Edward Loper, Dallas
+    Mahrt, Ken Manheimer, Vasko Miroslav, Skip Montanaro, Paul Moore,
+    Nigel W. Moriarty, Mark Nodine, Patrick K. O'Brien, Michel
+    Pelletier, Sam Penrose, Tim Peters, Pearu Peterson, Mark Pilgrim,
+    Brett g Porter, David Priest, Jens Quade, Andy Robinson, Tavis
+    Rudd, Oliver Rutherfurd, Kenichi Sato, Ueli Schlaepfer, Gunnar
+    Schwant, Bruce Smith, tav, Bob Tolbert, Paul Tremblay, Laurence
+    Tratt, Guido van Rossum, Martin von Loewis, Greg Ward, Barry
+    Warsaw, Edward Welbourne, Ka-Ping Yee, Moshe Zadka
+
+Thank you!
+
+(I'm still waiting for contributions of computer equipment and cold
+hard cash :-).)  Hopefully I haven't forgotten anyone or misspelled
+any names; apologies (and please let me know!) if I have.
+
+
+Future Plans
+============
+
+* The config file layout will be completely overhauled for the 0.4
+  release.
+
+* Include substitution files for character entities, produced by the
+  tools/unicode2rstsubs.py script.  As static data, these files could go
+  inside the docutils package somewhere.
+
+* Rename front-end tools (perhaps to "rst2*" pattern) and have
+  setup.py install them.
+
+* A Python Source Reader component (Python auto-documentation) will be
+  added as soon as there's enough time, effort, and will.  If you'd
+  like to help, let me know!
+
+
+Release 0.3 (2003-06-24)
+========================
+
+General:
+
+* Renamed "attribute" to "option" for directives/extensions.
+
+* Renamed transform method "transform" to "apply".
+
+* Renamed "options" to "settings" for runtime settings (as set by
+  command-line options).  Sometimes "option" (singular) became
+  "settings" (plural).  Some variations below:
+
+  - document.options -> document.settings (stored in other objects as
+    well)
+  - option_spec -> settings_spec (not directives though)
+  - OptionSpec -> SettingsSpec
+  - cmdline_options -> settings_spec
+  - relative_path_options -> relative_path_settings
+  - option_default_overrides -> settings_default_overrides
+  - Publisher.set_options -> Publisher.get_settings
+
+Specific:
+
+* COPYING.txt: Added "Public Domain Dedication".
+
+* FAQ.txt: Frequently asked questions, added to project.
+
+* setup.py:
+
+  - Updated with PyPI Trove classifiers.
+  - Conditional installation of third-party modules.
+
+* docutils/__init__.py:
+
+  - Bumped version to 0.2.1 to reflect changes to I/O classes.
+  - Bumped version to 0.2.2 to reflect changes to stylesheet options.
+  - Factored ``SettingsSpec`` out of ``Component``; separately useful.
+  - Bumped version to 0.2.3 because of the new "--embed-stylesheet"
+    option and its effect on the PEP template & writer.
+  - Bumped version to 0.2.4 due to changes to the PEP template &
+    stylesheet.
+  - Bumped version to 0.2.5 to reflect changes to Reporter output.
+  - Added ``TransformSpec`` class for new transform system.
+  - Bumped version to 0.2.6 for API changes (renaming).
+  - Bumped version to 0.2.7 for new ``docutils.core.publish_*``
+    convenience functions.
+  - Added ``Component.component_type`` attribute.
+  - Bumped version to 0.2.8 because of the internal parser switch from
+    plain lists to the docutils.statemachine.StringList objects.
+  - Bumped version to 0.2.9 because of the frontend.py API changes.
+  - Bumped version to 0.2.10 due to changes to the project layout
+    (third-party modules removed from the "docutils" package), and
+    signature changes in ``io.Input``/``io.Output``.
+  - Changed version to 0.3.0 for release.
+
+* docutils/core.py:
+
+  - Made ``publish()`` a bit more convenient.
+  - Generalized ``Publisher.set_io``.
+  - Renamed ``publish()`` to ``publish_cmdline()``; rearranged its
+    parameters; improved its docstring.
+  - Added ``publish_file()`` and ``publish_string()``.
+  - Factored ``Publisher.set_source()`` and ``.set_destination()``
+    out of ``.set_io``.
+  - Added support for "--dump-pseudo-xml", "--dump-settings", and
+    "--dump-transforms" hidden options.
+  - Added ``Publisher.apply_transforms()`` method.
+  - Added ``Publisher.set_components()`` method; support for
+    ``publish_*()`` convenience functions.
+  - Moved config file processing to docutils/frontend.py.
+  - Added support for exit status ("exit_level" setting &
+    ``enable_exit`` parameter for Publisher.publish() and convenience
+    functions).
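+
+  A minimal front end built on these convenience functions might look
+  like the following sketch (the keyword arguments shown are
+  assumptions based on the descriptions above; check the docstrings
+  for the exact signatures)::
+
+      #!/usr/bin/env python
+      # Hypothetical front end: read reStructuredText from a file or
+      # stdin and write HTML, much as tools/html.py does.
+      from docutils.core import publish_cmdline
+
+      publish_cmdline(writer_name='html',
+                      description='Generates HTML from standalone '
+                                  'reStructuredText sources.')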
+
+* docutils/frontend.py:
+
+  - Check for & exit on identical source & destination paths.
+  - Fixed bug with absolute paths & "--config".
+  - Set non-command-line defaults in ``OptionParser.__init__()``:
+    ``_source`` & ``_destination``.
+  - Distributed ``relative_path_settings`` to components; updated
+    ``OptionParser.populate_from_components()`` to combine it all.
+  - Require list of keys in ``make_paths_absolute`` (was implicit in
+    global ``relative_path_settings``).
+  - Added "--expose-internal-attribute", "--dump-pseudo-xml",
+    "--dump-settings", and "--dump-transforms" hidden options.
+  - Removed nasty internals-fiddling ``ConfigParser.get_section``
+    code, replaced with correct code.
+  - Added validation functionality for config files.
+  - Added "--error-encoding" option/setting, "_disable_config"
+    internal setting.
+  - Added encoding validation; updated "--input-encoding" and
+    "--output-encoding"; added "--error-encoding-error-handler" and
+    "--output-encoding-error-handler".
+  - Moved config file processing from docutils/core.py.
+  - Updated ``OptionParser.populate_from_components`` to handle new
+    ``SettingsSpec.settings_defaults`` dict.
+  - Added support for "-" => stdin/stdout.
+  - Added "exit_level" setting ("--exit" option).
+
+* docutils/io.py:
+
+  - Split ``IO`` classes into subclasses of ``Input`` and ``Output``.
+  - Added automatic closing to ``FileInput`` and ``FileOutput``.
+  - Delayed opening of ``FileOutput`` file until ``write()`` called.
+  - ``FileOutput.write()`` now returns the encoded output string.
+  - Try to get path/stream name automatically in ``FileInput`` &
+    ``FileOutput``.
+  - Added defaults for source & destination paths.
+  - Allow for Unicode I/O with an explicit "unicode" encoding.
+  - Added ``Output.encode()``.
+  - Removed dependency on runtime settings; pass encoding directly.
+  - Recognize Unicode strings in ``Input.decode()``.
+  - Added support for output encoding error handlers.
+
+* docutils/nodes.py:
+
+  - Added "Invisible" element category class.
+  - Changed ``Node.walk()`` & ``.walkabout()`` to permit more tree
+    modification during a traversal.
+  - Added element classes: ``line_block``, ``generated``, ``address``,
+    ``sidebar``, ``rubric``, ``attribution``, ``admonition``,
+    ``superscript``, ``subscript``, ``inline``
+  - Added support for lists of nodes to ``Element.insert()``.
+  - Fixed parent linking in ``Element.replace()``.
+  - Added new abstract superclass ``FixedTextElement``; adds
+    "xml:space" attribute.
+  - Added support for "line" attribute of ``system_message`` nodes.
+  - Added support for the observer pattern from ``utils.Reporter``.
+    Added ``parse_messages`` and ``transform_messages`` attributes to
+    ``document``, removed ``messages``.  Added ``note_parse_message``
+    and ``note_transform_message`` methods.
+  - Added support for improved diagnostics:
+
+    - Added "document", "source", and "line" internal attributes to
+      ``Node``, set by ``Node.setup_child()``.
+    - Converted variations on ``node.parent = self`` to
+      ``self.setup_child(node)``.
+    - Added ``document.current_source`` & ``.current_line``
+      attributes, and ``.note_source`` observer method.
+    - Changed "system_message" output to GNU-Tools format.
+
+  - Added a "rawsource" attribute to the ``Text`` class, for text
+    before backslash-escape resolution.
+  - Support for new transform system.
+  - Reworked ``pending`` element.
+  - Fixed XML DOM bug (SF #660611).
+  - Removed the ``interpeted`` element class and added
+    ``title_reference``, ``abbreviation``, ``acronym``.
+  - Made substitutions case-sensitive-but-forgiving; moved some code
+    from the parser.
+  - Fixed Unicode bug on element attributes (report: William Dode).
+
+* docutils/optik.py: Removed from project; replaced with
+  extras/optparse.py and extras/textwrap.py.  These will be installed
+  only if they're not already present in the Python installation.
+
+* docutils/roman.py: Moved to extras/roman.py; this will be installed
+  only if it's not already present in the Python installation.
+
+* docutils/statemachine.py:
+
+  - Factored out ``State.add_initial_transitions()`` so it can be
+    extended.
+  - Converted whitespace-specific "blank" and "indent" transitions
+    from special-case code to ordinary transitions: removed
+    ``StateMachineWS.check_line()`` & ``.check_whitespace()``, added
+    ``StateWS.add_initial_transitions()`` method, ``ws_patterns`` &
+    ``ws_initial_transitions`` attributes.
+  - Removed ``State.match_transition()`` after merging it into
+    ``.check_line()``.
+  - Added ``StateCorrection`` exception.
+  - Added support for ``StateCorrection`` in ``StateMachine.run()``
+    (moved ``TransitionCorrection`` support there too.)
+  - Changed ``StateMachine.next_line()`` and ``.goto_line()`` to raise
+    ``EOFError`` instead of ``IndexError``.
+  - Added ``State.no_match`` method.
+  - Added support for the Observer pattern, triggered by input line
+    changes.
+  - Added ``strip_top`` parameter to
+    ``StateMachineWS.get_first_known_indented``.
+  - Made ``context`` a parameter to ``StateMachine.run()``.
+  - Added ``ViewList`` & ``StringList`` classes;
+    ``extract_indented()`` becomes ``StringList.get_indented()``.
+  - Added ``StateMachine.insert_input()``.
+  - Fixed ViewList slice handling for Python 2.3.  Patch from (and
+    thanks to) Fred Drake.
+
+* docutils/utils.py:
+
+  - Added a ``source`` attribute to Reporter instances and
+    ``system_message`` elements.
+  - Added an observer pattern to ``utils.Reporter`` to keep track of
+    system messages.
+  - Fixed bugs in ``relative_path()``.
+  - Added support for improved diagnostics.
+  - Moved ``normalize_name()`` to nodes.py (``fully_normalize_name``).
+  - Added support for encoding Reporter stderr output, and encoding
+    error handlers.
+  - Reporter keeps track of the highest level system message yet
+    generated.
+
+* docutils/languages: Fixed bibliographic field language lookups.
+
+* docutils/languages/es.py: Added to project; Spanish mappings by
+  Marcelo Huerta San Martin.
+
+* docutils/languages/fr.py: Added to project; French mappings by
+  Stefane Fermigier.
+
+* docutils/languages/it.py: Added to project; Italian mappings by
+  Nicola Larosa.
+
+* docutils/languages/sk.py: Added to project; Slovak mappings by
+  Miroslav Vasko.
+
+* docutils/parsers/__init__.py:
+
+  - Added ``Parser.finish_parse()`` method.
+
+* docutils/parsers/rst/__init__.py:
+
+  - Added options: "--pep-references", "--rfc-references",
+    "--tab-width", "--trim-footnote-reference-space".
+
+* docutils/parsers/rst/states.py:
+
+  - Changed "title under/overline too short" system messages from INFO
+    to WARNING, and fixed its insertion location.
+  - Fixed enumerated list item parsing to allow paragraphs & section
+    titles to begin with enumerators.
+  - Converted system messages to use the new "line" attribute.
+  - Fixed a substitution reference edge case.
+  - Added support for "--pep-references" and "--rfc-references"
+    options; reworked ``Inliner`` code to make customization easier.
+  - Removed field argument parsing.
+  - Added support for short section title over/underlines.
+  - Fixed "simple reference name" regexp to ignore text like
+    "object.__method__"; not an anonymous reference.
+  - Added support for improved diagnostics.
+  - Reworked directive API, based on Dethe Elza's contribution.  Added
+    ``Body.parse_directive()``, ``.parse_directive_options()``,
+    ``.parse_directive_arguments()`` methods.
+  - Added ``ExtensionOptions`` class, to parse directive options
+    without parsing field bodies.  Factored
+    ``Body.parse_field_body()`` out of ``Body.field()``, overridden in
+    ``ExtensionOptions``.
+  - Improved definition list term/classifier parsing.
+  - Added warnings for unknown directives.
+  - Renamed ``Stuff`` to ``Struct``.
+  - Now flagged as errors: transitions at the beginning or end of
+    sections, empty sections (except title), and empty documents.
+  - Updated for ``statemachine.StringList``.
+  - Enabled recognition of schemeless email addresses in targets.
+  - Added support for embedded URIs in hyperlink references.
+  - Added backslash-escapes to inline markup end-string suffix.
+  - Added support for correct interpreted text processing.
+  - Fixed nested title parsing (topic, sidebar directives).
+  - Added special processing of backslash-escaped whitespace (idea
+    from David Abrahams).
+  - Made substitutions case-sensitive-but-forgiving; moved some code
+    to ``docutils.nodes``.
+  - Added support for block quote attributions.
+  - Added a kludge to work around a conflict between the bubble-up
+    parser strategy and short titles (<= 3 char-long over- &
+    underlines).  Fixes SF bug #738803 "infinite loop with multiple
+    titles" submitted by Jason Diamond.
+  - Added explicit interpreted text roles for standard inline markup:
+    "emphasis", "strong", "literal".
+  - Implemented "superscript" and "subscript" interpreted text roles.
+  - Added initial support for "abbreviation" and "acronym" roles;
+    incomplete.
+  - Added support for "--trim-footnote-reference-space" option.
+  - Optional space before colons in directives & hyperlink targets.
+
+* docutils/parsers/rst/tableparser.py:
+
+  - Fixed a bug that was producing unwanted empty rows in "simple"
+    tables.
+  - Detect bad column spans in "simple" tables.
+
+* docutils/parsers/rst/directives: Updated all directive functions to
+  new API.
+
+* docutils/parsers/rst/directives/__init__.py:
+
+  - Added ``flag()``, ``unchanged()``, ``path()``,
+    ``nonnegative_int()``, ``choice()``, and ``class_option()``
+    directive option helper functions.
+  - Added warnings for unknown directives.
+  - Return ``None`` for missing directives.
+  - Added ``register_directive()``, thanks to William Dode and Paul
+    Moore.
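+
+  A hypothetical directive using these helpers might be registered as
+  in the sketch below (it follows the conventions described in
+  spec/howto/rst-directives.txt; names and details are illustrative
+  only)::
+
+      from docutils import nodes
+      from docutils.parsers.rst import directives
+
+      def boilerplate(name, arguments, options, content, lineno,
+                      content_offset, block_text, state, state_machine):
+          """Insert a fixed paragraph of boilerplate text."""
+          text = 'This text was inserted by the "boilerplate" directive.'
+          if options.has_key('uppercase'):
+              text = text.upper()
+          return [nodes.paragraph(text, text)]
+
+      boilerplate.arguments = (0, 0, 0)   # no directive arguments
+      boilerplate.options = {'uppercase': directives.flag}
+      boilerplate.content = 0             # no content block allowed
+
+      directives.register_directive('boilerplate', boilerplate)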
+
+* docutils/parsers/rst/directives/admonitions.py:
+
+  - Added "admonition" directive.
+
+* docutils/parsers/rst/directives/body.py: Added to project.  Contains
+  the "topic", "sidebar" (from Patrick O'Brien), "line-block",
+  "parsed-literal", "rubric", "epigraph", "highlights" and
+  "pull-quote" directives.
+
+* docutils/parsers/rst/directives/images.py:
+
+  - Added an "align" attribute to the "image" & "figure" directives
+    (by Adam Chodorowski).
+  - Added "class" option to "image", and "figclass" to "figure".
+
+* docutils/parsers/rst/directives/misc.py:
+
+  - Added "include", "raw", and "replace" directives, courtesy of
+    Dethe Elza.
+  - Added "unicode" and "class" directives.
+
+* docutils/parsers/rst/directives/parts.py:
+
+  - Added the "sectnum" directive; by Dmitry Jemerov.
+  - Added "class" option to "contents" directive.
+
+* docutils/parsers/rst/directives/references.py: Added to project.
+  Contains the "target-notes" directive.
+
+* docutils/parsers/rst/languages/__init__.py:
+
+  - Return ``None`` from get_language() for missing language modules.
+
+* docutils/parsers/rst/languages/de.py: Added to project; German
+  mappings by Engelbert Gruber.
+
+* docutils/parsers/rst/languages/en.py:
+
+  - Added interpreted text roles mapping.
+
+* docutils/parsers/rst/languages/es.py: Added to project; Spanish
+  mappings by Marcelo Huerta San Martin.
+
+* docutils/parsers/rst/languages/fr.py: Added to project; French
+  mappings by William Dode.
+
+* docutils/parsers/rst/languages/it.py: Added to project; Italian
+  mappings by Nicola Larosa.
+
+* docutils/parsers/rst/languages/sk.py: Added to project; Slovak
+  mappings by Miroslav Vasko.
+
+* docutils/readers/__init__.py:
+
+  - Added support for the observer pattern from ``utils.Reporter``, in
+    ``Reader.parse`` and ``Reader.transform``.
+  - Removed ``Reader.transform()`` method.
+  - Added default parameter values to ``Reader.__init__()`` to make
+    instantiation easier.
+  - Removed bogus aliases: "restructuredtext" is *not* a Reader.
+
+* docutils/readers/pep.py:
+
+  - Added the ``peps.TargetNotes`` transform to the Reader.
+  - Removed PEP & RFC reference detection code; moved to
+    parsers/rst/states.py as options (enabled here by default).
+  - Added support for pre-acceptance PEPs (no PEP number yet).
+  - Moved ``Inliner`` & made it a class attribute of ``Reader`` for
+    easy subclassing.
+
+* docutils/readers/python: Python Source Reader subpackage added to
+  project, including preliminary versions of:
+
+  - __init__.py
+  - moduleparser.py: Parser for Python modules.
+
+* docutils/transforms/__init__.py:
+
+  - Added ``Transformer`` class and completed transform reform.
+
+* docutils/transforms/frontmatter.py:
+
+  - Improved support for generic fields.
+  - Fixed bibliographic field language lookups.
+
+* docutils/transforms/misc.py: Added to project.  Miscellaneous
+  transforms.
+
+* docutils/transforms/parts.py:
+
+  - Moved the "id" attribute from TOC list items to the references
+    (``Contents.build_contents()``).
+  - Added the ``SectNum`` transform; by Dmitry Jemerov.
+  - Added "class" attribute support to ``Contents``.
+
+* docutils/transforms/peps.py:
+
+  - Added ``mask_email()`` function, updating to pep2html.py's
+    functionality.
+  - Linked "Content-Type: text/x-rst" to PEP 12.
+  - Added the ``TargetNotes`` PEP-specific transform.
+  - Added ``TargetNotes.cleanup_callback``.
+  - Added title check to ``Headers``.
+  
+* docutils/transforms/references.py:
+
+  - Added the ``TargetNotes`` generic transform.
+  - Split ``Hyperlinks`` into multiple transforms.
+  - Fixed bug with multiply-indirect references (report: Bruce Smith).
+  - Added check for circular indirect references.
+  - Made substitutions case-sensitive-but-forgiving.
+
+* docutils/transforms/universal.py:
+
+  - Added support for the "--expose-internal-attributes" option.
+  - Removed ``Pending`` transform classes & data.
+
+* docutils/writers/__init__.py:
+
+  - Removed ``Writer.transform()`` method.
+
+* docutils/writers/docutils-xml.py:
+
+  - Added XML and doctype declarations.
+  - Added "--no-doctype" and "--no-xml-declaration" options.
+
+* docutils/writers/html4css1.py:
+
+  - "name" attributes only on these tags: a, applet, form, frame,
+    iframe, img, map.
+  - Added "name" attribute to <a> in section titles for Netscape 4
+    support (bug report: Pearu Peterson).
+  - Fixed targets (names) on footnote, citation, topic title,
+    problematic, and system_message nodes (for Netscape 4).
+  - Changed field names from "<td>" to "<th>".
+  - Added "@" to "&#64;" encoding to thwart address harvesters.
+  - Improved the vertical whitespace optimization; ignore "invisible"
+    nodes (targets, comments, etc.).
+  - Improved inline literals with ``<span class="pre">`` around chunks
+    of text and ``&nbsp;`` for runs of spaces.
+  - Improved modularity of output; added ``self.body_pre_docinfo`` and
+    ``self.docinfo`` segments.
+  - Added support for "line_block", "address" elements.
+  - Improved backlinks (footnotes & system_messages).
+  - Improved system_message output.
+  - Redefined "--stylesheet" as containing an invariant URL, used
+    verbatim.  Added "--stylesheet-path", interpreted w.r.t. the
+    working directory.
+  - Added "--footnote-references" option (superscript or brackets).
+  - Added "--compact-lists" and "--no-compact-lists" options.
+  - Added "--embed-stylesheet" and "--link-stylesheet" options;
+    factored out ``HTMLTranslator.get_stylesheet_reference()``.
+  - Improved field list rendering.
+  - Added Docutils version to "generator" meta tag.
+  - Fixed a bug with images; they must be inline, so wrapped in <p>.
+  - Improved layout of <pre> HTML source.
+  - Fixed attribute typo on <colspec>.
+  - Refined XML prologue.
+  - Support for no stylesheet.
+  - Removed "interpreted" element support.
+  - Added support for "title_reference", "sidebar", "attribution",
+    "rubric", and generic "admonition" elements.
+  - Added "--attribution" option.
+  - Added support for "inline", "subscript", "superscript" elements.
+  - Added initial support for "abbreviation" and "acronym";
+    incomplete.
+
+* docutils/writers/latex2e.py: LaTeX Writer, added by Engelbert Gruber
+  (from the sandbox).
+
+  - Added french.
+  - Double quotes in literal blocks (special treatment for de/ngerman).
+  - Added '--hyperlink-color' option ('0' turns off coloring of links).
+  - Added  "--attribution" option.
+  - Right align attributions. 
+
+* docutils/writers/pep_html.py:
+
+  - Parameterized output encoding in PEP template.
+  - Reworked substitutions from ``locals()`` into ``subs`` dict.
+  - Redefined "--pep-stylesheet" as containing an invariant URL, used
+    verbatim.  Added "--pep-stylesheet-path", interpreted w.r.t. the
+    working directory.
+  - Added an override on the "--footnote-references" option.
+  - Factored out ``HTMLTranslator.get_stylesheet_reference()``.
+  - Added Docutils version to "generator" meta tag.
+  - Added a "DO NOT EDIT THIS FILE" comment to generated HTML.
+
+* docs/tools.txt:
+
+  - Added a "silent" setting for ``buildhtml.py``.
+  - Added a "Getting Help" section.
+  - Rearranged the structure.
+  - Kept up to date, with new settings, command-line options etc.
+  - Added section for ``rst2latex.py`` (Engelbert Gruber).
+  - Converted settings table into a definition list.
+
+* docs/rst/quickstart.txt:
+
+  - Added a table of contents.
+  - Added feedback information.
+  - Added mention of minimum section title underline lengths.
+  - Removed the 4-character minimum for section title underlines.
+
+* docs/rst/quickref.html:
+
+  - Added a "Getting Help" section.
+  - Added a style to make section title backlinks more subtle.
+  - Added mention of minimum section title underline lengths.
+  - Removed the 4-character minimum for section title underlines.
+
+* extras: Directory added to project; contains third-party modules
+  that Docutils depends on (optparse, textwrap, roman).  These are
+  only installed if they're not already present.
+
+* licenses: Directory added to project; contains copies of license
+  files for non-public-domain files.
+
+* spec/doctree.txt:
+
+  - Changed the focus.  It's about DTD elements:  structural
+    relationships, semantics, and external (public) attributes.  Not
+    about the element class library.
+  - Moved some implementation-specific stuff into ``docutils.nodes``
+    docstrings.
+  - Wrote descriptions of all common attributes and parameter
+    entities.  Filled in introductory material.
+  - Working through the element descriptions: 55 down, 37 to go.
+  - Removed "Representation of Horizontal Rules" to
+    spec/rst/alternatives.txt.
+
+* spec/docutils.dtd:
+
+  - Added "generated" inline element.
+  - Added "line_block" body element.
+  - Added "auto" attribute to "title".
+  - Changed content models of "literal_block" and "doctest_block" to
+    ``%text.model``.
+  - Added ``%number;`` attribute type parameter entity.
+  - Changed ``%structural.elements;`` to ``%section.elements``.
+  - Updated attribute types; made more specific.
+  - Added "address" bibliographic element.
+  - Added "line" attribute to ``system_message`` element.
+  - Removed "field_argument" element; "field_name" may contain
+    multiple words and whitespace.
+  - Changed public identifier to docutils.sf.net.
+  - Removed "interpreted" element; added "title_reference",
+    "abbreviation", "acronym".
+  - Removed "refuri" attribute from "footnote_reference" and
+    "citation_reference".
+  - Added "sidebar", "rubric", "attribution", "admonition",
+    "superscript", "subscript", and "inline" elements.
+
+* spec/pep-0256.txt: Converted to reStructuredText & updated.
+
+* spec/pep-0257.txt: Converted to reStructuredText & updated.
+
+* spec/pep-0258.txt: Converted to reStructuredText & updated.
+
+* spec/semantics.txt: Updated with text from a Doc-SIG response to
+  Dallas Mahrt.
+
+* spec/transforms.txt: Added to project.
+
+* spec/howto: Added subdirectory, for developer how-to docs.
+
+* spec/howto/rst-directives.txt: Added to project.  Original by Dethe
+  Elza, edited & extended by David Goodger.
+
+* spec/howto/i18n.txt: Docutils Internationalization.  Added to
+  project.
+
+* spec/rst/alternatives.txt:
+
+  - Added "Doctree Representation of Transitions" from
+    spec/doctree.txt.
+  - Updated "Inline External Targets" & closed the debate.
+  - Added ideas for interpreted text syntax extensions.
+  - Added "Nested Inline Markup" section.
+
+* spec/rst/directives.txt:
+
+  - Added directives: "topic", "sectnum", "target-notes",
+    "line-block", "parsed-literal", "include", "replace", "sidebar",
+    "admonition", "rubric", "epigraph", "highlights", "unicode" and
+    "class".
+  - Formalized descriptions of directive details.
+  - Added an "align" attribute to the "image" & "figure" directives
+    (by Adam Chodorowski).
+  - Added "class" options to "topic", "sidebar", "line-block",
+    "parsed-literal", "contents", and "image"; and "figclass" to
+    "figure".
+
+* spec/rst/interpreted.txt: Added to project.  Descriptions of
+  interpreted text roles.
+
+* spec/rst/introduction.txt:
+
+  - Added pointers to material for new users.
+
+* spec/rst/reStructuredText.txt:
+
+  - Disambiguated comments (just add a newline after the "::").
+  - Updated enumerated list description; added a discussion of the
+    second-line validity checking.
+  - Updated directive description.
+  - Added a note redirecting newbies to the user docs.
+  - Expanded description of inline markup start-strings in non-markup
+    contexts.
+  - Removed field arguments and made field lists a generic construct.
+  - Removed the 4-character minimum for section title underlines.
+  - Clarified term/classifier delimiter & inline markup ambiguity
+    (definition lists).
+  - Added "Embedded URIs".
+  - Updated "Interpreted Text" section.
+  - Added "Character-Level Inline Markup" section.
+
+* test: Continually adding & updating tests.
+
+  - Moved test/test_rst/ to test/test_parsers/test_rst/.
+  - Moved test/test_pep/ to test/test_readers/test_pep/.
+  - Added test/test_readers/test_python/.
+  - Added test/test_writers/ (Engelbert Gruber).
+
+* tools:
+
+  - Made the ``locale.setlocale()`` calls in front ends
+    fault-tolerant.
+
+* tools/buildhtml.py:
+
+  - Added "--silent" option.
+  - Fixed bug with absolute paths & "--config".
+  - Updated for new I/O classes.
+  - Added some exception handling.
+  - Separated publishers' setting defaults; prevents interference.
+  - Updated for new ``publish_file()`` convenience function.
+
+* tools/pep-html-template:
+
+  - Allow for "--embed-stylesheet".
+  - Added Docutils version to "generator" meta tag.
+  - Added a "DO NOT EDIT THIS FILE" comment to generated HTML.
+  - Conform to XHTML spec.
+
+* tools/pep2html.py:
+
+  - Made ``argv`` a parameter to ``main()``.
+  - Added support for "Content-Type:" header & arbitrary PEP formats.
+  - Linked "Content-Type: text/plain" to PEP 9.
+  - Files skipped (due to an error) are not pushed onto the server.
+  - Updated for new I/O classes.
+  - Added ``check_requirements()`` & ``pep_type_error()``.
+  - Added some exception handling.
+  - Updated for new ``publish_string()`` convenience function.
+  - Added a "DO NOT EDIT THIS FILE" comment to generated HTML.
+
+* tools/quicktest.py:
+
+  - Added "-V"/"--version" option.
+
+* tools/rst2latex.py: LaTeX front end, added by Engelbert Gruber.
+
+* tools/unicode2rstsubs.py: Added to project.  Produces character
+  entity files (reStructuredText substitutions) from the MathML master
+  unicode.xml file.
+
+* tools/editors: Support code for editors, added to project.  Contains
+  ``emacs/restructuredtext.el``.
+
+* tools/stylesheets/default.css: Moved into the stylesheets directory.
+
+  - Added style for chunks of inline literals.
+  - Removed margin for first child of table cells.
+  - Right-aligned field list names.
+  - Support for auto-numbered section titles in TOCs.
+  - Increased the size of inline literals (<tt>) in titles.
+  - Restored the light gray background for inline literals.
+  - Added support for "line_block" elements.
+  - Added style for "address" elements.
+  - Removed "a.footnote-reference" style; doing it with ``<sup>`` now.
+  - Improved field list rendering.
+  - Vertical whitespace improvements.
+  - Removed "a.target" style.
+
+* tools/stylesheets/pep.css:
+
+  - Fixed nested section margins.
+  - Other changes parallel those of ``../default.css``.
+
+
+Release 0.2 (2002-07-31)
+========================
+
+General:
+
+- The word "component" was being used ambiguously.  From now on,
+  "component" will be used to mean "Docutils component", as in Reader,
+  Writer, Parser, or Transform.  Portions of documents (Table of
+  Contents, sections, etc.)  will be called "document parts".
+- Did a grand renaming: a lot of ``verylongnames`` became
+  ``very_long_names``.
+- Cleaned up imports: no more relative package imports or
+  comma-separated lists of top-level modules.
+- Added support for an option values object which carries default
+  settings and overrides (from command-line options and library use).
+- Added internal Unicode support, and support for both input and
+  output encodings.
+- Added support for the ``docutils.io.IO`` class & subclasses.
+
+Specific:
+
+* docutils/__init__.py:
+
+  - Added ``ApplicationError`` and ``DataError``, for use throughout
+    the package.
+  - Added ``Component`` base class for Docutils components; implements
+    the ``supports`` method.
+  - Added ``__version__`` (thus, ``docutils.__version__``).
+
+* docutils/core.py:
+
+  - Removed many keyword parameters to ``Publisher.__init__()`` and
+    ``publish()``; bundled into an option values object.  Added
+    "argv", "usage", "description", and "option_spec" parameters for
+    command-line support.
+  - Added ``Publisher.process_command_line()`` and ``.set_options()``
+    methods.
+  - Reworked I/O model for ``docutils.io`` wrappers.
+  - Updated ``Publisher.set_options()``; now returns option values
+    object.
+  - Added support for configuration files (/etc/docutils.conf,
+    ./docutils.conf, ~/.docutils).
+  - Added ``Publisher.setup_option_parser()``.
+  - Added default usage message and description.
+
+* docutils/frontend.py: Added to project; support for front-end
+  (command-line) scripts.  Option specifications may be augmented by
+  components.  Requires Optik (http://optik.sf.net/) for option
+  processing (installed locally as docutils/optik.py).
+
+* docutils/io.py: Added to project; uniform API for a variety of
+  input/output mechanisms.
+
+* docutils/nodes.py:
+
+  - Added ``TreeCopyVisitor`` class.
+  - Added a ``copy`` method to ``Node`` and subclasses.
+  - Added a ``SkipDeparture`` exception for visitors.
+  - Renamed ``TreePruningException`` from ``VisitorException``.
+  - Added docstrings to ``TreePruningException``, subclasses, and
+    ``Nodes.walk()``.
+  - Improved docstrings.
+  - Added ``SparseNodeVisitor``, refined ``NodeVisitor``.
+  - Moved ``utils.id()`` to ``nodes.make_id()`` to avoid circular
+    imports.
+  - Added ``decoration``, ``header``, and ``footer`` node classes, and
+    ``PreDecorative`` mixin.
+  - Reworked the name/id bookkeeping; to ``document``, removed
+    ``explicit_targets`` and ``implicit_targets`` attributes, added
+    ``nametypes`` attribute and ``set_name_id_map`` method.
+  - Added ``NodeFound`` exception, for use with ``NodeVisitor``
+    traversals.
+  - Added ``document.has_name()`` method.
+  - Fixed DOM generation for list-attributes.
+  - Added category class ``Labeled`` (used by footnotes & citations).
+  - Added ``Element.set_class()`` method (sets "class" attribute).
+
+* docutils/optik.py: Added to project.  Combined from the Optik
+  package, with added option groups and other modifications.  The use
+  of this module is probably only temporary.
+
+* docutils/statemachine.py:
+
+  - Added ``runtime_init`` method to ``StateMachine`` and ``State``.
+  - Added underscores to improve many awkward names.
+  - In ``string2lines()``, changed whitespace normalizing translation
+    table to regexp; restores Python 2.0 compatibility with Unicode.
+
+* docutils/urischemes.py:
+
+  - Filled in some descriptions.
+  - Added "shttp" scheme.
+
+* docutils/utils.py:
+
+  - Added ``clean_rcs_keywords`` function (moved from
+    docutils/transforms/frontmatter.py
+    ``DocInfo.filter_rcs_keywords``).
+  - Added underscores to improve many awkward names.
+  - Changed names of Reporter's thresholds:
+    warning_level -> report_level; error_level -> halt_level.
+  - Moved ``utils.id()`` to ``nodes.make_id()``.
+  - Added ``relative_path(source, target)``.
+
+* docutils/languages/de.py: German mappings; added to project.  Thanks
+  to Gunnar Schwant for the translations.
+
+* docutils/languages/en.py: Added "Dedication" bibliographic field
+  mappings.
+
+* docutils/languages/sv.py: Swedish mappings; added to project by Adam
+  Chodorowski.
+
+* docutils/parsers/rst/states.py:
+
+  - Added underscores to improve many awkward names.
+  - Added RFC-2822 header support.
+  - Extracted the inline parsing code from ``RSTState`` to a separate
+    class, ``Inliner``, which will allow easy subclassing.
+  - Made local bindings for ``memo`` container & often-used contents
+    (reduces code complexity a lot).  See ``RSTState.runtime_init()``.
+  - ``RSTState.parent`` replaces ``RSTState.statemachine.node``.
+  - Added ``MarkupMismatch`` exception; for late corrections.
+  - Added ``-/:`` characters to inline markup's start string prefix,
+    ``/`` to end string suffix.
+  - Fixed a footnote bug.
+  - Fixed a bug with literal blocks.
+  - Applied patch from Simon Budig: simplified regexps with symbolic
+    names, removed ``Inliner.groups`` and ``Body.explicit.groups``.
+  - Converted regexps from ``'%s' % var`` to ``'%(var)s' % locals()``.
+  - Fixed a bug in ``Inliner.interpreted_or_phrase_ref()``.
+  - Allowed non-ASCII in "simple names" (directive names, field names,
+    references, etc.).
+  - Converted ``Inliner.patterns.initial`` to be dynamically built
+    from parts with ``build_regexp()`` function.
+  - Changed ``Inliner.inline_target`` to ``.inline_internal_target``.
+  - Updated docstrings.
+  - Changed "table" to "grid_table"; added "simple_table" support.
+
+* docutils/parsers/rst/tableparser.py:
+
+  - Changed ``TableParser`` to ``GridTableParser``.
+  - Added ``SimpleTableParser``.
+  - Refactored naming.
+
+* docutils/parsers/rst/directives/__init__.py: Added "en" (English) as
+  a fallback language for directive names.
+
+* docutils/parsers/rst/directives/html.py: Changed the ``meta``
+  directive to use a ``pending`` element, used only by HTML writers.
+
+* docutils/parsers/rst/directives/parts.py: Renamed from
+  components.py.
+
+  - Added "backlinks" attribute to "contents" directive.
+
+* docutils/parsers/rst/languages/sv.py: Swedish mappings; added to
+  project by Adam Chodorowski.
+
+* docutils/readers/__init__.py: Gave Readers more control over
+  choosing and instantiating Parsers.
+
+* docutils/readers/pep.py: Added to project; for PEP processing.
+
+* docutils/transforms/__init__.py: ``Transform.__init__()`` now
+  requires a ``component`` parameter.
+
+* docutils/transforms/components.py: Added to project; transforms
+  related to Docutils components.
+
+* docutils/transforms/frontmatter.py:
+
+  - In ``DocInfo.extract_authors``, check for a single "author" in an
+    "authors" group, and convert it to a single "author" element.
+  - Added support for "Dedication" and generic bibliographic fields.
+
+* docutils/transforms/peps.py: Added to project; PEP-specific.
+
+* docutils/transforms/parts.py: Renamed from old components.py.
+
+  - Added filter for `Contents`, to use alt-text for inline images,
+    and to remove inline markup that doesn't make sense in the ToC.
+  - Added "name" attribute to TOC topic depending on its title.
+  - Added support for optional TOC backlinks.
+
+* docutils/transforms/references.py: Fixed indirect target resolution
+  in ``Hyperlinks`` transform.
+
+* docutils/transforms/universal.py:
+
+  - Changed ``Messages`` transform to properly filter out system
+    messages below the warning threshold.
+  - Added ``Decorations`` transform (support for ``--generator``,
+    ``--date``, ``--time``, ``--source-link`` options).
+
+* docutils/writers/__init__.py: Added "pdf" alias in anticipation of
+  Engelbert Gruber's PDF writer.
+
+* docutils/writers/html4css1.py:
+
+  - Made XHTML-compatible (switched to lowercase element & attribute
+    names; empty tag format).
+  - Escape double-dashes in comment text.
+  - Improved boilerplate & modularity of output.
+  - Exposed modular output in Writer class.
+  - Added a "generator" meta tag to <head>.
+  - Added support for the ``--stylesheet`` option.
+  - Added support for ``decoration``, ``header``, and ``footer``
+    elements.
+  - In ``HTMLTranslator.attval()``, changed whitespace normalizing
+    translation table to regexp; restores Python 2.0 compatibility
+    with Unicode.
+  - Added the translator class as instance variable to the Writer, to
+    make it easily subclassable.
+  - Improved option list spacing (thanks to Richard Jones).
+  - Modified field list output.
+  - Added backlinks to footnotes & citations.
+  - Added percentage widths to "<col>" tags (from colspec).
+  - Option lists: "<code>" changed to "<kbd>", ``option_argument``
+    "<span>" changed to "<var>".
+  - Inline literals: "<code>" changed to "<tt>".
+  - Many changes to optimize vertical space: compact simple lists etc.
+  - Added command-line options & directive attributes to control TOC
+    and footnote/citation backlinks.
+  - Added support for optional footnote/citation backlinks.
+  - Added support for generic bibliographic fields.
+  - Identify backrefs.
+  - Relative URLs for stylesheet links.
+
+* docutils/writers/pep_html.py: Added to project; HTML Writer for
+  PEPs (subclass of ``html4css1.Writer``).
+
+* docutils/writers/pseudoxml.py: Renamed from pprint.py.
+
+* docutils/writers/docutils_xml.py: Added to project; trivial writer
+  of the Docutils internal doctree in XML.
+
+* docs/tools.txt: "Docutils Front-End Tools", added to project.
+
+* spec/doctree.txt:
+
+  - Changed the title to "The Docutils Document Tree".
+  - Added "Hyperlink Bookkeeping" section.
+
+* spec/docutils.dtd:
+
+  - Added ``decoration``, ``header``, and ``footer`` elements.
+  - Brought ``interpreted`` element in line with the parser: changed
+    attribute "type" to "role", added "position".
+  - Added support for generic bibliographic fields.
+
+* spec/notes.txt: Continual updates.  Added "Project Policies".
+
+* spec/pep-0256.txt:  Updated.  Added "Roadmap to the Docstring PEPs"
+  section.
+
+* spec/pep-0257.txt: Clarified prohibition of signature repetition.
+
+* spec/pep-0258.txt: Updated.  Added text from pysource.txt and
+  mailing list discussions.
+
+* spec/pep-0287.txt:
+
+  - Renamed to "reStructuredText Docstring Format".
+  - Minor edits.
+  - Reworked Q&A as an enumerated list.
+  - Converted to reStructuredText format.
+
+* spec/pysource.dtd:
+
+  - Reworked structural elements, incorporating ideas from Tony Ibbs.
+
+* spec/pysource.txt: Removed from project.  Moved much of its contents
+  to pep-0258.txt.
+
+* spec/rst/alternatives.txt:
+
+  - Expanded auto-enumerated list idea; thanks to Fred Bremmer.
+  - Added "Inline External Targets" section.
+
+* spec/rst/directives.txt:
+
+  - Added "backlinks" attribute to "contents" directive.
+
+* spec/rst/problems.txt:
+
+  - Updated the Enumerated List Markup discussion.
+  - Added new alternative table markup syntaxes.
+
+* spec/rst/reStructuredText.txt:
+
+  - Clarified field list usage.
+  - Updated enumerated list description.
+  - Clarified purpose of directives.
+  - Added ``-/:`` characters to inline markup's start string prefix,
+    ``/`` to end string suffix.
+  - Updated "Authors" bibliographic field behavior.
+  - Changed "inline hyperlink targets" to "inline internal targets".
+  - Added "simple table" syntax to supplement the existing but
+    newly-renamed "grid tables".
+  - Added cautions for anonymous hyperlink use.
+  - Added "Dedication" and generic bibliographic fields.
+
+* test: Made test modules standalone (subdirectories became packages).
+
+* test/DocutilsTestSupport.py:
+
+  - Added support for PEP extensions to reStructuredText.
+  - Added support for simple tables.
+  - Refactored naming.
+
+* test/package_unittest.py: Renamed from UnitTestFolder.py.
+
+  - Now supports true packages containing test modules
+    (``__init__.py`` files required); fixes duplicate module name bug.
+
+* test/test_pep/: Subpackage added to project; PEP testing.
+
+* test/test_rst/test_SimpleTableParser.py: Added to project.
+
+* tools:
+
+  - Updated html.py and publish.py front-end tools to use the new
+    command-line processing facilities of ``docutils.frontend``
+    (exposed in ``docutils.core.Publisher``), reducing each to just a
+    few lines of code.
+  - Added ``locale.setlocale()`` calls to front-end tools.
+
+* tools/buildhtml.py: Added to project; batch-generates .html from all
+  the .txt files in directories and subdirectories.
+
+* tools/default.css:
+
+  - Added support for ``header`` and ``footer`` elements.
+  - Added styles for "Dedication" topics (biblio fields).
+
+* tools/docutils.conf: A configuration file; added to project.
+
+* tools/docutils-xml.py: Added to project.
+
+* tools/pep.py: Added to project; PEP to HTML front-end tool.
+
+* tools/pep-html-template: Added to project.
+
+* tools/pep2html.py: Added to project from Python (nondist/peps).
+  Added support for Docutils (reStructuredText PEPs).
+
+* tools/quicktest.py:
+
+  - Added the ``--attributes`` option, hacked a bit.
+  - Added a second command-line argument (output file); cleaned up.
+
+* tools/stylesheets/: Subdirectory added to project.
+
+* tools/stylesheets/pep.css: Added to project; stylesheet for PEPs.
+
+
+Release 0.1 (2002-04-20)
+========================
+
+This is the first release of Docutils, merged from the now inactive
+reStructuredText__ and `Docstring Processing System`__ projects.  For
+the pre-Docutils history, see the `reStructuredText HISTORY`__ and the
+`DPS HISTORY`__ files.
+
+__ http://structuredtext.sourceforge.net/
+__ http://docstring.sourceforge.net/
+__ http://structuredtext.sourceforge.net/HISTORY.html
+__ http://docstring.sourceforge.net/HISTORY.html
+
+General changes: renamed 'dps' package to 'docutils'; renamed
+'restructuredtext' subpackage to 'rst'; merged the codebases; merged
+the test suites (reStructuredText's test/test_states renamed to
+test/test_rst); and all modifications required to make it all work.
+
+* docutils/parsers/rst/states.py:
+
+  - Improved diagnostic system messages for missing blank lines.
+  - Fixed substitution_reference bug.
+
+
+..
+   Local Variables:
+   mode: indented-text
+   indent-tabs-mode: nil
+   sentence-end-double-space: t
+   fill-column: 70
+   End:

Added: trunk/www/utils/helpers/docutils/README.gnue
===================================================================
--- trunk/www/utils/helpers/docutils/README.gnue        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/README.gnue        2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,12 @@
+The code in this directory is used to create the website HTML from
+our various text files.  The bulk of this directory is in the public
+domain, with a few exceptions.  The exceptions are GPL-compatible.
+
+NOTE: This is based on docutils 0.3, but I have modified it to work
+better with our file formats.  More specifically:
+
+  * 
+
+  * 
+
+  * 

Added: trunk/www/utils/helpers/docutils/README.txt
===================================================================
--- trunk/www/utils/helpers/docutils/README.txt 2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/README.txt 2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,330 @@
+==================
+ README: Docutils
+==================
+
+:Author: David Goodger
+:Contact: address@hidden
+:Date: $Date: 2003/06/24 15:20:16 $
+:Web site: http://docutils.sourceforge.net/
+:Copyright: This document has been placed in the public domain.
+
+.. contents::
+
+
+Thank you for downloading the Python Docutils project archive.  As
+this is a work in progress, please check the project website for
+updated working files (snapshots).  This project should be considered
+highly experimental; APIs are subject to change at any time.
+
+
+Quick-Start
+===========
+
+This is for those who want to get up & running quickly.  Read on for
+complete details.
+
+1. Get and install the latest release of Python, available from
+
+       http://www.python.org/
+
+   Python 2.2 or later [#py21]_ is required; Python 2.2.2 or later is
+   recommended.
+
+2. Use the latest Docutils code.  Get the code from CVS or from the
+   snapshot:
+
+       http://docutils.sf.net/docutils-snapshot.tgz
+
+   See `Releases & Snapshots`_ below for details.
+
+3. Unpack the tarball and install with the standard ::
+
+       python setup.py install
+
+   See Installation_ below for details.
+
+4. Use a front-end tool from the "tools" subdirectory of the same
+   directory as in step 3.  For example::
+
+       cd tools
+       html.py test.txt test.html
+
+   See Usage_ below for details.
+
+
+Purpose
+=======
+
+The purpose of the Docutils project is to create a set of tools for
+processing plaintext documentation into useful formats, such as HTML,
+XML, and TeX.  Support for the following sources has been implemented:
+
+* Standalone files.
+
+* `PEPs (Python Enhancement Proposals)`_.
+
+Support for the following sources is planned:
+
+* Inline documentation from Python modules and packages, extracted
+  with namespace context.  **This is the focus of the current
+  development effort.**
+
+* Email (RFC-822 headers, quoted excerpts, signatures, MIME parts).
+
+* Wikis, with global reference lookups of "wiki links".
+
+* Compound documents, such as multiple chapter files merged into a
+  book.
+
+* And others as discovered.
+
+.. _PEPs (Python Enhancement Proposals):
+   http://www.python.org/peps/pep-0012.html
+
+
+Releases & Snapshots
+====================
+
+Putting together an official "Release" of Docutils is a significant
+effort, so it isn't done that often.  In the meantime, the CVS
+snapshots always contain the latest code and documentation, usually
+updated within an hour of changes being committed to the repository,
+and usually bug-free:
+
+* Snapshot of Docutils code, front-end tools, tests, documentation,
+  and specifications: http://docutils.sf.net/docutils-snapshot.tgz
+
+* Snapshot of the Sandbox (experimental, contributed code):
+  http://docutils.sf.net/docutils-sandbox-snapshot.tgz
+
+* Snapshot of web files (the files that generate the web site):
+  http://docutils.sf.net/docutils-web-snapshot.tgz
+
+To keep up to date on the latest developments, download fresh copies
+of the snapshots regularly.  New functionality is being added weekly,
+sometimes daily.  (There's also the CVS repository, and a mailing list
+for CVS messages.  See the web site [address above] or spec/notes.txt
+for details.)
+
+
+Requirements
+============
+
+To run the code, Python 2.2 or later [#py21]_ must already be
+installed.  The latest release is recommended (2.2.2 as of this
+writing).  Python is available from http://www.python.org/.
+
+.. [#py21] Python 2.1 may be used providing the compiler package is
+   installed.  The compiler package can be found in the Tools/
+   directory of Python 2.1's source distribution.
+
+
+Project Files & Directories
+===========================
+
+* README.txt: You're reading it.
+
+* COPYING.txt: Public Domain Dedication and copyright details for
+  non-public-domain files (most are PD).
+
+* FAQ.txt: Docutils Frequently Asked Questions.
+
+* HISTORY.txt: Release notes for the current and previous project
+  releases.
+
+* setup.py: Installation script.  See "Installation" below.
+
+* install.py: Quick & dirty installation script.  Just run it.
+
+* docutils: The project source directory, installed as a Python
+  package.
+
+* extras: Directory for third-party modules that Docutils depends on.
+  These are only installed if they're not already present.
+
+* docs: The project user documentation directory.  Contains the
+  following documents:
+
+  - docs/tools.txt: Docutils Front-End Tools
+  - docs/latex.txt: Docutils LaTeX Writer
+  - docs/rst/quickstart.txt: A ReStructuredText Primer
+  - docs/rst/quickref.html: Quick reStructuredText (HTML only)
+
+* licenses: Directory containing copies of license files for
+  non-public-domain files.
+
+* spec: The project specification directory.  Contains PEPs (Python
+  Enhancement Proposals), XML DTDs (document type definitions), and
+  other documents.  The ``spec/rst`` directory contains the
+  reStructuredText specification.  The ``spec/howto`` directory
+  contains How-To documents for developers.
+
+* tools: Directory for Docutils front-end tools.  See docs/tools.txt
+  for documentation.
+
+* test: Unit tests.  Not required to use the software, but very useful
+  if you're planning to modify it.  See `Running the Test Suite`_
+  below.
+
+
+Installation
+============
+
+The first step is to expand the ``.tar.gz`` or ``.tgz`` archive.  It
+contains a distutils setup file "setup.py".  OS-specific installation
+instructions follow.
+
+
+GNU/Linux, BSDs, Unix, Mac OS X, etc.
+-------------------------------------
+
+1. Open a shell.
+
+2. Go to the directory created by expanding the archive::
+
+       cd <archive_directory_path>
+
+3. Install the package::
+
+       python setup.py install
+
+   If the python executable isn't on your path, you'll have to specify
+   the complete path, such as /usr/local/bin/python.  You may need
+   root permissions to complete this step.
+
+You can also just run install.py; it does the same thing.
+
+
+Windows
+-------
+
+1. Open a DOS box (Command Shell, MSDOS Prompt, or whatever they're
+   calling it these days).
+
+2. Go to the directory created by expanding the archive::
+
+       cd <archive_directory_path>
+
+3. Install the package::
+
+       <path_to_python.exe>\python setup.py install
+
+If your system is set up to run Python when you double-click on .py
+files, you can run install.py to do the same as the above.
+
+
+Mac OS 8/9
+----------
+
+1. Open the folder containing the expanded archive.
+
+2. Double-click on the file "setup.py", which should be a "Python
+   module" file.
+
+   If the file isn't a "Python module", the line endings are probably
+   also wrong, and you will need to set up your system to recognize
+   ".py" file extensions as Python files.  See
+   http://gotools.sourceforge.net/mac/python.html for detailed
+   instructions.  Once set up, it's easiest to start over by expanding
+   the archive again.
+
+3. The distutils options window will appear.  From the "Command" popup
+   list choose "install", click "Add", then click "OK".
+
+If install.py is a "Python module" (see step 2 above if it isn't), you
+can run it (double-click) instead of the above.  The distutils options
+window will not appear.
+
+
+Usage
+=====
+
+After unpacking and installing the Docutils package, the following
+shell commands will generate HTML for all included documentation::
+
+    cd <archive_directory_path>/tools
+    buildhtml.py ../
+
+The final directory name of the ``<archive_directory_path>`` is
+"docutils" for snapshots.  For official releases, the directory may be
+called "docutils-X.Y", where "X.Y" is the release version.
+Alternatively::
+
+    cd <archive_directory_path>
+    tools/buildhtml.py --config=tools/docutils.conf
+
+Some files may generate system messages (warnings and errors).  The
+``tools/test.txt`` file (under the archive directory) contains 5
+intentional errors.  (They test the error reporting mechanism!)
+
+There are many front-end tools in the unpacked "tools" subdirectory.
+You may want to begin with the "html.py" front-end tool.  Most tools
+take up to two arguments, the source path and destination path, with
+STDIN and STDOUT being the defaults.  Use the "--help" option to the
+front-end tools for details on options and arguments.  See `Docutils
+Front-End Tools`_ (``docs/tools.txt``) for full documentation.
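+
+The same processing is also available directly from Python, via the
+convenience functions in ``docutils.core`` (a sketch; the keyword
+names are assumptions and may differ in this release)::
+
+    from docutils.core import publish_file
+
+    # Roughly equivalent to "html.py test.txt test.html".
+    publish_file(source_path='test.txt',
+                 destination_path='test.html',
+                 writer_name='html')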
+
+The package modules are continually growing and evolving.  The
+``docutils.statemachine`` module is usable independently.  It contains
+extensive inline documentation (in reStructuredText format of course).
+
+Contributions are welcome!
+
+.. _Docutils Front-End Tools: docs/tools.html
+
+
+Running the Test Suite
+======================
+
+To run the entire test suite, after installation_ open a shell and use
+the following commands::
+
+    cd <archive_directory_path>/test
+    ./alltests.py
+
+You should see a long line of periods, one for each test, and then a
+summary like this::
+
+    Ran 518 tests in 24.653s
+
+    OK
+    Elapsed time: 26.189 seconds
+
+The number of tests will grow over time, and the times reported will
+depend on the computer running the tests.  The difference between the
+two times represents the time required to set up the tests (import
+modules, create data structures, etc.).
+
+If any of the tests fail, please `open a bug report`_ or `send
+email`_.  Include all relevant output, along with your operating
+system, Python version, and Docutils version.  To see the Docutils
+version, use these commands::
+
+    cd ../tools
+    ./quicktest.py --version
+
+.. _open a bug report:
+   http://sourceforge.net/tracker/?group_id=38414&atid=422030
+.. _send email: mailto:address@hidden
+   ?subject=Docutils%20test%20suite%20failure
+
+
+Getting Help
+============
+
+If you have questions or need assistance with Docutils or
+reStructuredText, please `post a message`_ to the `Docutils-Users
+mailing list`_.
+
+.. _post a message: mailto:address@hidden
+.. _Docutils-Users mailing list:
+   http://lists.sourceforge.net/lists/listinfo/docutils-users
+
+
+..
+   Local Variables:
+   mode: indented-text
+   indent-tabs-mode: nil
+   sentence-end-double-space: t
+   fill-column: 70
+   End:

Added: trunk/www/utils/helpers/docutils/__init__.py
===================================================================

Added: trunk/www/utils/helpers/docutils/docutils/__init__.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/__init__.py       2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/__init__.py       2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,128 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.24 $
+# Date: $Date: 2003/06/25 01:47:04 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This is the Docutils (Python Documentation Utilities) package.
+
+Package Structure
+=================
+
+Modules:
+
+- __init__.py: Contains the package docstring only (this text).
+
+- core.py: Contains the ``Publisher`` class and ``publish()`` convenience
+  function.
+
+- frontend.py: Command-line and common processing for Docutils front-ends.
+
+- io.py: Provides a uniform API for low-level input and output.
+
+- nodes.py: Docutils document tree (doctree) node class library.
+
+- statemachine.py: A finite state machine specialized for
+  regular-expression-based text filters.
+
+- urischemes.py: Contains a complete mapping of known URI addressing
+  scheme names to descriptions.
+
+- utils.py: Contains the ``Reporter`` system warning class and miscellaneous
+  utilities.
+
+Subpackages:
+
+- languages: Language-specific mappings of terms.
+
+- parsers: Syntax-specific input parser modules or packages.
+
+- readers: Context-specific input handlers which understand the data
+  source and manage a parser.
+
+- transforms: Modules used by readers and writers to modify DPS
+  doctrees.
+
+- writers: Format-specific output translators.
+"""
+
+__docformat__ = 'reStructuredText'
+
+__version__ = '0.3.0'
+"""``major.minor.micro`` version number.  The micro number is bumped any time
+there's a change in the API incompatible with one of the front ends.  The
+minor number is bumped whenever there is a project release.  The major number
+will be bumped when the project is feature-complete, and perhaps if there is a
+major change in the design."""
+
+
+class ApplicationError(StandardError): pass
+class DataError(ApplicationError): pass
+
+
+class SettingsSpec:
+
+    """
+    Runtime setting specification base class.
+
+    SettingsSpec subclass objects are used by `docutils.frontend.OptionParser`.
+    """
+
+    settings_spec = ()
+    """Runtime settings specification.  Override in subclasses.
+
+    Specifies runtime settings and associated command-line options, as used by
+    `docutils.frontend.OptionParser`.  This tuple contains one or more sets of
+    option group title, description, and a list/tuple of tuples: ``('help
+    text', [list of option strings], {keyword arguments})``.  Group title
+    and/or description may be `None`; no group title implies no group, just a
+    list of single options.  Runtime settings names are derived implicitly
+    from long option names ("--a-setting" becomes ``settings.a_setting``) or
+    explicitly from the "dest" keyword argument."""
+
+    settings_defaults = None
+    """A dictionary of defaults for internal or inaccessible (by command-line
+    or config file) settings.  Override in subclasses."""
+
+    settings_default_overrides = None
+    """A dictionary of auxiliary defaults, to override defaults for settings
+    defined in other components.  Override in subclasses."""
+
+    relative_path_settings = ()
+    """Settings containing filesystem paths.  Override in subclasses.
+
+    Settings listed here are to be interpreted relative to the current working
+    directory."""
+
+
+class TransformSpec:
+
+    """
+    Runtime transform specification base class.
+
+    TransformSpec subclass objects are used by `docutils.transforms.Transformer`.
+    """
+
+    default_transforms = ()
+    """Transforms required by this class.  Override in subclasses."""
+
+
+class Component(SettingsSpec, TransformSpec):
+
+    """Base class for Docutils components."""
+
+    component_type = None
+    """Override in subclasses."""
+
+    supported = ()
+    """Names for this component.  Override in subclasses."""
+
+    def supports(self, format):
+        """
+        Is `format` supported by this component?
+
+        To be used by transforms to ask the dependent component if it supports
+        a certain input context or output format.
+        """
+        return format in self.supported

Added: trunk/www/utils/helpers/docutils/docutils/core.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/core.py   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/core.py   2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,336 @@
+# Authors: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.32 $
+# Date: $Date: 2003/06/16 17:30:20 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Calling the ``publish_*`` convenience functions (or instantiating a
+`Publisher` object) with component names will result in default
+behavior.  For custom behavior (setting component options), create
+custom component objects first, and pass *them* to
+``publish_*``/`Publisher`.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+from docutils import Component
+from docutils import frontend, io, readers, parsers, writers
+from docutils.frontend import OptionParser, ConfigParser
+
+
+class Publisher:
+
+    """
+    A facade encapsulating the high-level logic of a Docutils system.
+    """
+
+    def __init__(self, reader=None, parser=None, writer=None,
+                 source=None, source_class=io.FileInput,
+                 destination=None, destination_class=io.FileOutput,
+                 settings=None):
+        """
+        Initial setup.  If any of `reader`, `parser`, or `writer` are not
+        specified, the corresponding ``set_...`` method should be called with
+        a component name (`set_reader` sets the parser as well).
+        """
+
+        self.reader = reader
+        """A `readers.Reader` instance."""
+
+        self.parser = parser
+        """A `parsers.Parser` instance."""
+
+        self.writer = writer
+        """A `writers.Writer` instance."""
+
+        self.source = source
+        """The source of input data, an `io.Input` instance."""
+
+        self.source_class = source_class
+        """The class for dynamically created source objects."""
+
+        self.destination = destination
+        """The destination for docutils output, an `io.Output` instance."""
+
+        self.destination_class = destination_class
+        """The class for dynamically created destination objects."""
+
+        self.settings = settings
+        """An object containing Docutils settings as instance attributes.
+        Set by `self.process_command_line()` or `self.get_settings()`."""
+
+    def set_reader(self, reader_name, parser, parser_name):
+        """Set `self.reader` by name."""
+        reader_class = readers.get_reader_class(reader_name)
+        self.reader = reader_class(parser, parser_name)
+        self.parser = self.reader.parser
+
+    def set_writer(self, writer_name):
+        """Set `self.writer` by name."""
+        writer_class = writers.get_writer_class(writer_name)
+        self.writer = writer_class()
+
+    def set_components(self, reader_name, parser_name, writer_name):
+        if self.reader is None:
+            self.set_reader(reader_name, self.parser, parser_name)
+        if self.parser is None:
+            if self.reader.parser is None:
+                self.reader.set_parser(parser_name)
+            self.parser = self.reader.parser
+        if self.writer is None:
+            self.set_writer(writer_name)
+
+    def setup_option_parser(self, usage=None, description=None,
+                            settings_spec=None, **defaults):
+        #@@@ Add self.source & self.destination to components in future?
+        option_parser = OptionParser(
+            components=(settings_spec, self.parser, self.reader, self.writer),
+            defaults=defaults, read_config_files=1,
+            usage=usage, description=description)
+        return option_parser
+
+    def get_settings(self, usage=None, description=None,
+                     settings_spec=None, **defaults):
+        """
+        Set and return default settings (overrides in `defaults` keyword
+        argument).
+
+        Set components first (`self.set_reader` & `self.set_writer`).
+        Explicitly setting `self.settings` disables command line option
+        processing from `self.publish()`.
+        """
+        option_parser = self.setup_option_parser(usage, description,
+                                                 settings_spec, **defaults)
+        self.settings = option_parser.get_default_values()
+        return self.settings
+
+    def process_command_line(self, argv=None, usage=None, description=None,
+                             settings_spec=None, **defaults):
+        """
+        Pass an empty list to `argv` to avoid reading `sys.argv` (the
+        default).
+
+        Set components first (`self.set_reader` & `self.set_writer`).
+        """
+        option_parser = self.setup_option_parser(usage, description,
+                                                 settings_spec, **defaults)
+        if argv is None:
+            argv = sys.argv[1:]
+        self.settings = option_parser.parse_args(argv)
+
+    def set_io(self, source_path=None, destination_path=None):
+        if self.source is None:
+            self.set_source(source_path=source_path)
+        if self.destination is None:
+            self.set_destination(destination_path=destination_path)
+
+    def set_source(self, source=None, source_path=None):
+        if source_path is None:
+            source_path = self.settings._source
+        else:
+            self.settings._source = source_path
+        self.source = self.source_class(
+            source=source, source_path=source_path,
+            encoding=self.settings.input_encoding)
+
+    def set_destination(self, destination=None, destination_path=None):
+        if destination_path is None:
+            destination_path = self.settings._destination
+        else:
+            self.settings._destination = destination_path
+        self.destination = self.destination_class(
+            destination=destination, destination_path=destination_path,
+            encoding=self.settings.output_encoding,
+            error_handler=self.settings.output_encoding_error_handler)
+
+    def apply_transforms(self, document):
+        document.transformer.populate_from_components(
+            (self.source, self.reader, self.reader.parser, self.writer,
+             self.destination))
+        document.transformer.apply_transforms()
+
+    def publish(self, argv=None, usage=None, description=None,
+                settings_spec=None, settings_overrides=None,
+                enable_exit=None):
+        """
+        Process command line options and arguments (if `self.settings` not
+        already set), run `self.reader` and then `self.writer`.  Return
+        `self.writer`'s output.
+        """
+        if self.settings is None:
+            self.process_command_line(argv, usage, description, settings_spec,
+                                      **(settings_overrides or {}))
+        elif settings_overrides:
+            self.settings._update(settings_overrides, 'loose')
+        self.set_io()
+        document = self.reader.read(self.source, self.parser, self.settings)
+        self.apply_transforms(document)
+        output = self.writer.write(document, self.destination)
+        if self.settings.dump_settings:
+            from pprint import pformat
+            print >>sys.stderr, '\n::: Runtime settings:'
+            print >>sys.stderr, pformat(self.settings.__dict__)
+        if self.settings.dump_internals:
+            from pprint import pformat
+            print >>sys.stderr, '\n::: Document internals:'
+            print >>sys.stderr, pformat(document.__dict__)
+        if self.settings.dump_transforms:
+            from pprint import pformat
+            print >>sys.stderr, '\n::: Transforms applied:'
+            print >>sys.stderr, pformat(document.transformer.applied)
+        if self.settings.dump_pseudo_xml:
+            print >>sys.stderr, '\n::: Pseudo-XML:'
+            print >>sys.stderr, document.pformat().encode(
+                'raw_unicode_escape')
+        if enable_exit and (document.reporter.max_level
+                            >= self.settings.exit_level):
+            sys.exit(document.reporter.max_level + 10)
+        return output
+
+
+default_usage = '%prog [options] [<source> [<destination>]]'
+default_description = ('Reads from <source> (default is stdin) and writes to '
+                       '<destination> (default is stdout).')
+
+def publish_cmdline(reader=None, reader_name='standalone',
+                    parser=None, parser_name='restructuredtext',
+                    writer=None, writer_name='pseudoxml',
+                    settings=None, settings_spec=None,
+                    settings_overrides=None, enable_exit=1, argv=None,
+                    usage=default_usage, description=default_description):
+    """
+    Set up & run a `Publisher`.  For command-line front ends.
+
+    Parameters:
+
+    - `reader`: A `docutils.readers.Reader` object.
+    - `reader_name`: Name or alias of the Reader class to be instantiated if
+      no `reader` supplied.
+    - `parser`: A `docutils.parsers.Parser` object.
+    - `parser_name`: Name or alias of the Parser class to be instantiated if
+      no `parser` supplied.
+    - `writer`: A `docutils.writers.Writer` object.
+    - `writer_name`: Name or alias of the Writer class to be instantiated if
+      no `writer` supplied.
+    - `settings`: Runtime settings object.
+    - `settings_spec`: Extra settings specification; a `docutils.SettingsSpec`
+      subclass.  Used only if no `settings` specified.
+    - `settings_overrides`: A dictionary containing program-specific overrides
+      of component settings.
+    - `enable_exit`: Boolean; enable exit status at end of processing?
+    - `argv`: Command-line argument list to use instead of ``sys.argv[1:]``.
+    - `usage`: Usage string, output if there's a problem parsing the command
+      line.
+    - `description`: Program description, output for the "--help" option
+      (along with command-line option descriptions).
+    """
+    pub = Publisher(reader, parser, writer, settings=settings)
+    pub.set_components(reader_name, parser_name, writer_name)
+    pub.publish(argv, usage, description, settings_spec, settings_overrides,
+                enable_exit=enable_exit)
+
+def publish_file(source=None, source_path=None,
+                 destination=None, destination_path=None,
+                 reader=None, reader_name='standalone',
+                 parser=None, parser_name='restructuredtext',
+                 writer=None, writer_name='pseudoxml',
+                 settings=None, settings_spec=None, settings_overrides=None,
+                 enable_exit=None):
+    """
+    Set up & run a `Publisher`.  For programmatic use with file-like I/O.
+
+    Parameters:
+
+    - `source`: A file-like object (must have "read" and "close" methods).
+    - `source_path`: Path to the input file.  Opened if no `source` supplied.
+      If neither `source` nor `source_path` are supplied, `sys.stdin` is used.
+    - `destination`: A file-like object (must have "write" and "close"
+      methods).
+    - `destination_path`: Path to the output file.  Opened if no `destination`
+      supplied.  If neither `destination` nor `destination_path` are supplied,
+      `sys.stdout` is used.
+    - `reader`: A `docutils.readers.Reader` object.
+    - `reader_name`: Name or alias of the Reader class to be instantiated if
+      no `reader` supplied.
+    - `parser`: A `docutils.parsers.Parser` object.
+    - `parser_name`: Name or alias of the Parser class to be instantiated if
+      no `parser` supplied.
+    - `writer`: A `docutils.writers.Writer` object.
+    - `writer_name`: Name or alias of the Writer class to be instantiated if
+      no `writer` supplied.
+    - `settings`: Runtime settings object.
+    - `settings_spec`: Extra settings specification; a `docutils.SettingsSpec`
+      subclass.  Used only if no `settings` specified.
+    - `settings_overrides`: A dictionary containing program-specific overrides
+      of component settings.
+    - `enable_exit`: Boolean; enable exit status at end of processing?
+    """
+    pub = Publisher(reader, parser, writer, settings=settings)
+    pub.set_components(reader_name, parser_name, writer_name)
+    if settings is None:
+        settings = pub.get_settings(settings_spec=settings_spec)
+    if settings_overrides:
+        settings._update(settings_overrides, 'loose')
+    pub.set_source(source, source_path)
+    pub.set_destination(destination, destination_path)
+    pub.publish(enable_exit=enable_exit)
+
+def publish_string(source, source_path=None, destination_path=None, 
+                   reader=None, reader_name='standalone',
+                   parser=None, parser_name='restructuredtext',
+                   writer=None, writer_name='pseudoxml',
+                   settings=None, settings_spec=None,
+                   settings_overrides=None, enable_exit=None):
+    """
+    Set up & run a `Publisher`, and return the string output.
+    For programmatic use with string I/O.
+
+    For encoded string output, be sure to set the "output_encoding" setting to
+    the desired encoding.  Set it to "unicode" for unencoded Unicode string
+    output.  Here's how::
+
+        publish_string(..., settings_overrides={'output_encoding': 'unicode'})
+
+    Similarly for Unicode string input (`source`)::
+
+        publish_string(..., settings_overrides={'input_encoding': 'unicode'})
+
+    Parameters:
+
+    - `source`: An input string; required.  This can be an encoded 8-bit
+      string (set the "input_encoding" setting to the correct encoding) or a
+      Unicode string (set the "input_encoding" setting to "unicode").
+    - `source_path`: Path to the file or object that produced `source`;
+      optional.  Only used for diagnostic output.
+    - `destination_path`: Path to the file or object which will receive the
+      output; optional.  Used for determining relative paths (stylesheets,
+      source links, etc.).
+    - `reader`: A `docutils.readers.Reader` object.
+    - `reader_name`: Name or alias of the Reader class to be instantiated if
+      no `reader` supplied.
+    - `parser`: A `docutils.parsers.Parser` object.
+    - `parser_name`: Name or alias of the Parser class to be instantiated if
+      no `parser` supplied.
+    - `writer`: A `docutils.writers.Writer` object.
+    - `writer_name`: Name or alias of the Writer class to be instantiated if
+      no `writer` supplied.
+    - `settings`: Runtime settings object.
+    - `settings_spec`: Extra settings specification; a `docutils.SettingsSpec`
+      subclass.  Used only if no `settings` specified.
+    - `settings_overrides`: A dictionary containing program-specific overrides
+      of component settings.
+    - `enable_exit`: Boolean; enable exit status at end of processing?
+    """
+    pub = Publisher(reader, parser, writer, settings=settings,
+                    source_class=io.StringInput,
+                    destination_class=io.StringOutput)
+    pub.set_components(reader_name, parser_name, writer_name)
+    if settings is None:
+        settings = pub.get_settings(settings_spec=settings_spec)
+    if settings_overrides:
+        settings._update(settings_overrides, 'loose')
+    pub.set_source(source, source_path)
+    pub.set_destination(destination_path=destination_path)
+    return pub.publish(enable_exit=enable_exit)

Added: trunk/www/utils/helpers/docutils/docutils/frontend.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/frontend.py       2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/frontend.py       2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,471 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.38 $
+# Date: $Date: 2003/06/16 17:30:48 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Command-line and common processing for Docutils front-end tools.
+
+Exports the following classes:
+
+- `OptionParser`: Standard Docutils command-line processing.
+- `Values`: Runtime settings; objects are simple structs
+  (``object.attribute``).
+- `ConfigParser`: Standard Docutils config file processing.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import os
+import os.path
+import sys
+import types
+import ConfigParser as CP
+import codecs
+import docutils
+import optparse
+from optparse import Values, SUPPRESS_HELP
+
+
+def store_multiple(option, opt, value, parser, *args, **kwargs):
+    """
+    Store multiple values in `parser.values`.  (Option callback.)
+    
+    Store `None` for each attribute named in `args`, and store the value for
+    each key (attribute name) in `kwargs`.
+    """
+    for attribute in args:
+        setattr(parser.values, attribute, None)
+    for key, value in kwargs.items():
+        setattr(parser.values, key, value)
+
+def read_config_file(option, opt, value, parser):
+    """
+    Read a configuration file during option processing.  (Option callback.)
+    """
+    config_parser = ConfigParser()
+    config_parser.read(value, parser)
+    settings = config_parser.get_section('options')
+    make_paths_absolute(settings, parser.relative_path_settings,
+                        os.path.dirname(value))
+    parser.values.__dict__.update(settings)
+
+def set_encoding(option, opt, value, parser):
+    """
+    Validate & set the encoding specified.  (Option callback.)
+    """
+    try:
+        value = validate_encoding(option.dest, value)
+    except LookupError, error:
+        raise (optparse.OptionValueError('option "%s": %s' % (opt, error)),
+               None, sys.exc_info()[2])
+    setattr(parser.values, option.dest, value)
+
+def validate_encoding(name, value):
+    try:
+        codecs.lookup(value)
+    except LookupError:
+        raise (LookupError('unknown encoding: "%s"' % value),
+               None, sys.exc_info()[2])
+    return value
+
+def set_encoding_error_handler(option, opt, value, parser):
+    """
+    Validate & set the encoding error handler specified.  (Option callback.)
+    """
+    try:
+        value = validate_encoding_error_handler(option.dest, value)
+    except LookupError, error:
+        raise (optparse.OptionValueError('option "%s": %s' % (opt, error)),
+               None, sys.exc_info()[2])
+    setattr(parser.values, option.dest, value)
+
+def validate_encoding_error_handler(name, value):
+    try:
+        codecs.lookup_error(value)
+    except AttributeError:              # prior to Python 2.3
+        if value not in ('strict', 'ignore', 'replace'):
+            raise (LookupError(
+                'unknown encoding error handler: "%s" (choices: '
+                '"strict", "ignore", or "replace")' % value),
+                   None, sys.exc_info()[2])
+    except LookupError:
+        raise (LookupError(
+            'unknown encoding error handler: "%s" (choices: '
+            '"strict", "ignore", "replace", "backslashreplace", '
+            '"xmlcharrefreplace", and possibly others; see documentation for '
+            'the Python ``codecs`` module)' % value),
+               None, sys.exc_info()[2])
+    return value
+
+def set_encoding_and_error_handler(option, opt, value, parser):
+    """
+    Validate & set the encoding and error handler specified.  (Option callback.)
+    """
+    try:
+        value = validate_encoding_and_error_handler(option.dest, value)
+    except LookupError, error:
+        raise (optparse.OptionValueError('option "%s": %s' % (opt, error)),
+               None, sys.exc_info()[2])
+    if ':' in value:
+        encoding, handler = value.split(':')
+        setattr(parser.values, option.dest + '_error_handler', handler)
+    else:
+        encoding = value
+    setattr(parser.values, option.dest, encoding)
+
+def validate_encoding_and_error_handler(name, value):
+    if ':' in value:
+        encoding, handler = value.split(':')
+        validate_encoding_error_handler(name + '_error_handler', handler)
+    else:
+        encoding = value
+    validate_encoding(name, encoding)
+    return value
+
+def make_paths_absolute(pathdict, keys, base_path=None):
+    """
+    Interpret filesystem path settings relative to the `base_path` given.
+
+    Paths are values in `pathdict` whose keys are in `keys`.  Get `keys` from
+    `OptionParser.relative_path_settings`.
+    """
+    if base_path is None:
+        base_path = os.getcwd()
+    for key in keys:
+        if pathdict.has_key(key) and pathdict[key]:
+            pathdict[key] = os.path.normpath(
+                os.path.abspath(os.path.join(base_path, pathdict[key])))
+
+
+class OptionParser(optparse.OptionParser, docutils.SettingsSpec):
+
+    """
+    Parser for command-line and library use.  The `settings_spec`
+    specifications here and in other Docutils components are merged to build
+    the set of command-line options and runtime settings for this process.
+
+    Common settings (defined below) and component-specific settings must not
+    conflict.  Short options are reserved for common settings, and components
+    are restricted to using long options.
+    """
+
+    threshold_choices = 'info 1 warning 2 error 3 severe 4 none 5'.split()
+    """Possible inputs for for --report and --halt threshold values."""
+
+    thresholds = {'info': 1, 'warning': 2, 'error': 3, 'severe': 4, 'none': 5}
+    """Lookup table for --report and --halt threshold values."""
+
+    if hasattr(codecs, 'backslashreplace_errors'):
+        default_error_encoding_error_handler = 'backslashreplace'
+    else:
+        default_error_encoding_error_handler = 'replace'
+
+    settings_spec = (
+        'General Docutils Options',
+        None,
+        (('Include a "Generated by Docutils" credit and link at the end '
+          'of the document.',
+          ['--generator', '-g'], {'action': 'store_true'}),
+         ('Do not include a generator credit.',
+          ['--no-generator'], {'action': 'store_false', 'dest': 'generator'}),
+         ('Include the date at the end of the document (UTC).',
+          ['--date', '-d'], {'action': 'store_const', 'const': '%Y-%m-%d',
+                             'dest': 'datestamp'}),
+         ('Include the time & date at the end of the document (UTC).',
+          ['--time', '-t'], {'action': 'store_const',
+                             'const': '%Y-%m-%d %H:%M UTC',
+                             'dest': 'datestamp'}),
+         ('Do not include a datestamp of any kind.',
+          ['--no-datestamp'], {'action': 'store_const', 'const': None,
+                               'dest': 'datestamp'}),
+         ('Include a "View document source" link (relative to destination).',
+          ['--source-link', '-s'], {'action': 'store_true'}),
+         ('Use the supplied <URL> verbatim for a "View document source" '
+          'link; implies --source-link.',
+          ['--source-url'], {'metavar': '<URL>'}),
+         ('Do not include a "View document source" link.',
+          ['--no-source-link'],
+          {'action': 'callback', 'callback': store_multiple,
+           'callback_args': ('source_link', 'source_url')}),
+         ('Enable backlinks from section headers to table of contents '
+          'entries.  This is the default.',
+          ['--toc-entry-backlinks'],
+          {'dest': 'toc_backlinks', 'action': 'store_const', 'const': 'entry',
+           'default': 'entry'}),
+         ('Enable backlinks from section headers to the top of the table of '
+          'contents.',
+          ['--toc-top-backlinks'],
+          {'dest': 'toc_backlinks', 'action': 'store_const', 'const': 'top'}),
+         ('Disable backlinks to the table of contents.',
+          ['--no-toc-backlinks'],
+          {'dest': 'toc_backlinks', 'action': 'store_false'}),
+         ('Enable backlinks from footnotes and citations to their '
+          'references.  This is the default.',
+          ['--footnote-backlinks'],
+          {'action': 'store_true', 'default': 1}),
+         ('Disable backlinks from footnotes and citations.',
+          ['--no-footnote-backlinks'],
+          {'dest': 'footnote_backlinks', 'action': 'store_false'}),
+         ('Set verbosity threshold; report system messages at or higher than '
+          '<level> (by name or number: "info" or "1", warning/2, error/3, '
+          'severe/4; also, "none" or "5").  Default is 2 (warning).',
+          ['--report', '-r'], {'choices': threshold_choices, 'default': 2,
+                               'dest': 'report_level', 'metavar': '<level>'}),
+         ('Report all system messages, info-level and higher.  (Same as '
+          '"--report=info".)',
+          ['--verbose', '-v'], {'action': 'store_const', 'const': 'info',
+                                'dest': 'report_level'}),
+         ('Do not report any system messages.  (Same as "--report=none".)',
+          ['--quiet', '-q'], {'action': 'store_const', 'const': 'none',
+                              'dest': 'report_level'}),
+         ('Set the threshold (<level>) at or above which system messages are '
+          'converted to exceptions, halting execution immediately.  Levels '
+          'as in --report.  Default is 4 (severe).',
+          ['--halt'], {'choices': threshold_choices, 'dest': 'halt_level',
+                       'default': 4, 'metavar': '<level>'}),
+         ('Same as "--halt=info": halt processing at the slightest problem.',
+          ['--strict'], {'action': 'store_const', 'const': 'info',
+                         'dest': 'halt_level'}),
+         ('Enable a non-zero exit status for normal exit if non-halting '
+          'system messages (at or above <level>) were generated.  Levels as '
+          'in --report.  Default is 5 (disabled).  Exit status is the maximum '
+          'system message level plus 10 (11 for INFO, etc.).',
+          ['--exit'], {'choices': threshold_choices, 'dest': 'exit_level',
+                       'default': 5, 'metavar': '<level>'}),
+         ('Report debug-level system messages.',
+          ['--debug'], {'action': 'store_true'}),
+         ('Do not report debug-level system messages.',
+          ['--no-debug'], {'action': 'store_false', 'dest': 'debug'}),
+         ('Send the output of system messages (warnings) to <file>.',
+          ['--warnings'], {'dest': 'warning_stream', 'metavar': '<file>'}),
+         ('Specify the encoding of input text.  Default is locale-dependent.',
+          ['--input-encoding', '-i'],
+          {'action': 'callback', 'callback': set_encoding,
+           'metavar': '<name>', 'type': 'string', 'dest': 'input_encoding'}),
+         ('Specify the text encoding for output.  Default is UTF-8.  '
+          'Optionally also specify the encoding error handler for unencodable '
+          'characters (see "--error-encoding"); default is "strict".',
+          ['--output-encoding', '-o'],
+          {'action': 'callback', 'callback': set_encoding_and_error_handler,
+           'metavar': '<name[:handler]>', 'type': 'string',
+           'dest': 'output_encoding', 'default': 'utf-8'}),
+         (SUPPRESS_HELP,                # usually handled by --output-encoding
+          ['--output_encoding_error_handler'],
+          {'action': 'callback', 'callback': set_encoding_error_handler,
+           'type': 'string', 'dest': 'output_encoding_error_handler',
+           'default': 'strict'}),
+         ('Specify the text encoding for error output.  Default is ASCII.  '
+          'Optionally also specify the encoding error handler for unencodable '
+          'characters, after a colon (":").  Acceptable values are the same '
+          'as for the "error" parameter of Python\'s ``encode`` string '
+          'method.  Default is "%s".' % default_error_encoding_error_handler,
+          ['--error-encoding', '-e'],
+          {'action': 'callback', 'callback': set_encoding_and_error_handler,
+           'metavar': '<name[:handler]>', 'type': 'string',
+           'dest': 'error_encoding', 'default': 'ascii'}),
+         (SUPPRESS_HELP,                # usually handled by --error-encoding
+          ['--error_encoding_error_handler'],
+          {'action': 'callback', 'callback': set_encoding_error_handler,
+           'type': 'string', 'dest': 'error_encoding_error_handler',
+           'default': default_error_encoding_error_handler}),
+         ('Specify the language of input text (ISO 639 2-letter identifier).'
+          '  Default is "en" (English).',
+          ['--language', '-l'], {'dest': 'language_code', 'default': 'en',
+                                 'metavar': '<name>'}),
+         ('Read configuration settings from <file>, if it exists.',
+          ['--config'], {'metavar': '<file>', 'type': 'string',
+                         'action': 'callback', 'callback': read_config_file}),
+         ("Show this program's version number and exit.",
+          ['--version', '-V'], {'action': 'version'}),
+         ('Show this help message and exit.',
+          ['--help', '-h'], {'action': 'help'}),
+         # Hidden options, for development use only:
+         (SUPPRESS_HELP, ['--dump-settings'], {'action': 'store_true'}),
+         (SUPPRESS_HELP, ['--dump-internals'], {'action': 'store_true'}),
+         (SUPPRESS_HELP, ['--dump-transforms'], {'action': 'store_true'}),
+         (SUPPRESS_HELP, ['--dump-pseudo-xml'], {'action': 'store_true'}),
+         (SUPPRESS_HELP, ['--expose-internal-attribute'],
+          {'action': 'append', 'dest': 'expose_internals'}),))
+    """Runtime settings and command-line options common to all Docutils front
+    ends.  Setting specs specific to individual Docutils components are also
+    used (see `populate_from_components()`)."""
+
+    settings_defaults = {'_disable_config': None}
+    """Defaults for settings that don't have command-line option 
equivalents."""
+
+    relative_path_settings = ('warning_stream',)
+
+    version_template = '%%prog (Docutils %s)' % docutils.__version__
+    """Default version message."""
+
+    def __init__(self, components=(), defaults=None, read_config_files=None,
+                 *args, **kwargs):
+        """
+        `components` is a list of Docutils components each containing a
+        ``.settings_spec`` attribute.  `defaults` is a mapping of setting
+        default overrides.
+        """
+        optparse.OptionParser.__init__(
+            self, add_help_option=None,
+            formatter=optparse.TitledHelpFormatter(width=78),
+            *args, **kwargs)
+        if not self.version:
+            self.version = self.version_template
+        # Make an instance copy (it will be modified):
+        self.relative_path_settings = list(self.relative_path_settings)
+        self.populate_from_components((self,) + tuple(components))
+        defaults = defaults or {}
+        if read_config_files and not self.defaults['_disable_config']:
+            config = ConfigParser()
+            config.read_standard_files(self)
+            config_settings = config.get_section('options')
+            make_paths_absolute(config_settings, self.relative_path_settings)
+            defaults.update(config_settings)
+        # Internal settings with no defaults from settings specifications;
+        # initialize manually:
+        self.set_defaults(_source=None, _destination=None, **defaults)
+
+    def populate_from_components(self, components):
+        """
+        For each component, first populate from the `SettingsSpec.settings_spec`
+        structure, then from the `SettingsSpec.settings_defaults` dictionary.
+        After all components have been processed, check for and populate from
+        each component's `SettingsSpec.settings_default_overrides` dictionary.
+        """
+        for component in components:
+            if component is None:
+                continue
+            i = 0
+            settings_spec = component.settings_spec
+            self.relative_path_settings.extend(
+                component.relative_path_settings)
+            while i < len(settings_spec):
+                title, description, option_spec = settings_spec[i:i+3]
+                if title:
+                    group = optparse.OptionGroup(self, title, description)
+                    self.add_option_group(group)
+                else:
+                    group = self        # single options
+                for (help_text, option_strings, kwargs) in option_spec:
+                    group.add_option(help=help_text, *option_strings,
+                                     **kwargs)
+                if component.settings_defaults:
+                    self.defaults.update(component.settings_defaults)
+                i += 3
+        for component in components:
+            if component and component.settings_default_overrides:
+                self.defaults.update(component.settings_default_overrides)
+
+    def check_values(self, values, args):
+        if hasattr(values, 'report_level'):
+            values.report_level = self.check_threshold(values.report_level)
+        if hasattr(values, 'halt_level'):
+            values.halt_level = self.check_threshold(values.halt_level)
+        if hasattr(values, 'exit_level'):
+            values.exit_level = self.check_threshold(values.exit_level)
+        values._source, values._destination = self.check_args(args)
+        make_paths_absolute(values.__dict__, self.relative_path_settings,
+                            os.getcwd())
+        return values
+
+    def check_threshold(self, level):
+        try:
+            return int(level)
+        except ValueError:
+            try:
+                return self.thresholds[level.lower()]
+            except (KeyError, AttributeError):
+                self.error('Unknown threshold: %r.' % level)
+
+    def check_args(self, args):
+        source = destination = None
+        if args:
+            source = args.pop(0)
+            if source == '-':           # means stdin
+                source = None
+        if args:
+            destination = args.pop(0)
+            if destination == '-':      # means stdout
+                destination = None
+        if args:
+            self.error('Maximum 2 arguments allowed.')
+        if source and source == destination:
+            self.error('Do not specify the same file for both source and '
+                       'destination.  It will clobber the source file.')
+        return source, destination
+
+
+class ConfigParser(CP.ConfigParser):
+
+    standard_config_files = (
+        '/etc/docutils.conf',               # system-wide
+        './docutils.conf',                  # project-specific
+        os.path.expanduser('~/.docutils'))  # user-specific
+    """Docutils configuration files, using ConfigParser syntax (section
+    'options').  Later files override earlier ones."""
+
+    validation = {
+        'options':
+        {'input_encoding': validate_encoding,
+         'output_encoding': validate_encoding,
+         'output_encoding_error_handler': validate_encoding_error_handler,
+         'error_encoding': validate_encoding,
+         'error_encoding_error_handler': validate_encoding_error_handler}}
+    """{section: {option: validation function}} mapping, used by
+    `validate_options`.  Validation functions take two parameters: name and
+    value.  They return a (possibly modified) value, or raise an exception."""
+
+    def read_standard_files(self, option_parser):
+        self.read(self.standard_config_files, option_parser)
+
+    def read(self, filenames, option_parser):
+        if type(filenames) in types.StringTypes:
+            filenames = [filenames]
+        for filename in filenames:
+            CP.ConfigParser.read(self, filename)
+            self.validate_options(filename, option_parser)
+
+    def validate_options(self, filename, option_parser):
+        for section in self.validation.keys():
+            if not self.has_section(section):
+                continue
+            for option in self.validation[section].keys():
+                if self.has_option(section, option):
+                    value = self.get(section, option)
+                    validator = self.validation[section][option]
+                    try:
+                        new_value = validator(option, value)
+                    except Exception, error:
+                        raise (ValueError(
+                            'Error in config file "%s", section "[%s]":\n'
+                            '    %s: %s\n        %s = %s'
+                            % (filename, section, error.__class__.__name__,
+                               error, option, value)), None, sys.exc_info()[2])
+                    self.set(section, option, new_value)
+
+    def optionxform(self, optionstr):
+        """
+        Transform '-' to '_' so the cmdline form of option names can be used.
+        """
+        return optionstr.lower().replace('-', '_')
+
+    def get_section(self, section, raw=0, vars=None):
+        """
+        Return a given section as a dictionary (empty if the section
+        doesn't exist).
+
+        All % interpolations are expanded in the return values, based on the
+        defaults passed into the constructor, unless the optional argument
+        `raw` is true.  Additional substitutions may be provided using the
+        `vars` argument, which must be a dictionary whose contents override
+        any pre-existing defaults.
+
+        The section DEFAULT is special.
+        """
+        section_dict = {}
+        if self.has_section(section):
+            for option in self.options(section):
+                section_dict[option] = self.get(section, option, raw, vars)
+        return section_dict

Added: trunk/www/utils/helpers/docutils/docutils/io.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/io.py     2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/io.py     2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,274 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.26 $
+# Date: $Date: 2003/06/17 13:26:52 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+I/O classes provide a uniform API for low-level input and output.  Subclasses
+will exist for a variety of input/output mechanisms.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import locale
+from types import UnicodeType
+from docutils import TransformSpec
+
+
+class Input(TransformSpec):
+
+    """
+    Abstract base class for input wrappers.
+    """
+
+    component_type = 'input'
+
+    default_source_path = None
+
+    def __init__(self, source=None, source_path=None, encoding=None):
+        self.encoding = encoding
+        """Text encoding for the input source."""
+
+        self.source = source
+        """The source of input data."""
+
+        self.source_path = source_path
+        """A text reference to the source."""
+
+        if not source_path:
+            self.source_path = self.default_source_path
+
+    def __repr__(self):
+        return '%s: source=%r, source_path=%r' % (self.__class__, self.source,
+                                                  self.source_path)
+
+    def read(self):
+        raise NotImplementedError
+
+    def decode(self, data):
+        """
+        Decode a string, `data`, heuristically.
+        Raise UnicodeError if unsuccessful.
+
+        The client application should call ``locale.setlocale`` at the
+        beginning of processing::
+
+            locale.setlocale(locale.LC_ALL, '')
+        """
+        if (self.encoding and self.encoding.lower() == 'unicode'
+            or isinstance(data, UnicodeType)):
+            return unicode(data)
+        encodings = [self.encoding, 'utf-8']
+        try:
+            encodings.append(locale.nl_langinfo(locale.CODESET))
+        except:
+            pass
+        try:
+            encodings.append(locale.getlocale()[1])
+        except:
+            pass
+        try:
+            encodings.append(locale.getdefaultlocale()[1])
+        except:
+            pass
+        encodings.append('latin-1')
+        for enc in encodings:
+            if not enc:
+                continue
+            try:
+                return unicode(data, enc)
+            except (UnicodeError, LookupError):
+                pass
+        raise UnicodeError(
+            'Unable to decode input data.  Tried the following encodings: %s.'
+            % ', '.join([repr(enc) for enc in encodings if enc]))
+
+
+class Output(TransformSpec):
+
+    """
+    Abstract base class for output wrappers.
+    """
+
+    component_type = 'output'
+
+    default_destination_path = None
+
+    def __init__(self, destination=None, destination_path=None,
+                 encoding=None, error_handler='strict'):
+        self.encoding = encoding
+        """Text encoding for the output destination."""
+
+        self.error_handler = error_handler or 'strict'
+        """Text encoding error handler."""
+
+        self.destination = destination
+        """The destination for output data."""
+
+        self.destination_path = destination_path
+        """A text reference to the destination."""
+
+        if not destination_path:
+            self.destination_path = self.default_destination_path
+
+    def __repr__(self):
+        return ('%s: destination=%r, destination_path=%r'
+                % (self.__class__, self.destination, self.destination_path))
+
+    def write(self, data):
+        raise NotImplementedError
+
+    def encode(self, data):
+        if self.encoding and self.encoding.lower() == 'unicode':
+            return data
+        else:
+            return data.encode(self.encoding, self.error_handler)
+
+
+class FileInput(Input):
+
+    """
+    Input for single, simple file-like objects.
+    """
+
+    def __init__(self, source=None, source_path=None,
+                 encoding=None, autoclose=1):
+        """
+        :Parameters:
+            - `source`: either a file-like object (which is read directly), or
+              `None` (which implies `sys.stdin` if no `source_path` given).
+            - `source_path`: a path to a file, which is opened and then read.
+            - `autoclose`: close automatically after read (boolean); always
+              false if `sys.stdin` is the source.
+        """
+        Input.__init__(self, source, source_path, encoding)
+        self.autoclose = autoclose
+        if source is None:
+            if source_path:
+                self.source = open(source_path)
+            else:
+                self.source = sys.stdin
+                self.autoclose = None
+        if not source_path:
+            try:
+                self.source_path = self.source.name
+            except AttributeError:
+                pass
+
+    def read(self):
+        """Read and decode a single file and return the data."""
+        data = self.source.read()
+        if self.autoclose:
+            self.close()
+        return self.decode(data)
+
+    def close(self):
+        self.source.close()
+
+
+class FileOutput(Output):
+
+    """
+    Output for single, simple file-like objects.
+    """
+
+    def __init__(self, destination=None, destination_path=None,
+                 encoding=None, error_handler='strict', autoclose=1):
+        """
+        :Parameters:
+            - `destination`: either a file-like object (which is written
+              directly) or `None` (which implies `sys.stdout` if no
+              `destination_path` given).
+            - `destination_path`: a path to a file, which is opened and then
+              written.
+            - `autoclose`: close automatically after write (boolean); always
+              false if `sys.stdout` is the destination.
+        """
+        Output.__init__(self, destination, destination_path,
+                        encoding, error_handler)
+        self.opened = 1
+        self.autoclose = autoclose
+        if destination is None:
+            if destination_path:
+                self.opened = None
+            else:
+                self.destination = sys.stdout
+                self.autoclose = None
+        if not destination_path:
+            try:
+                self.destination_path = self.destination.name
+            except AttributeError:
+                pass
+
+    def open(self):
+        self.destination = open(self.destination_path, 'w')
+        self.opened = 1
+
+    def write(self, data):
+        """Encode `data`, write it to a single file, and return it."""
+        output = self.encode(data)
+        if not self.opened:
+            self.open()
+        self.destination.write(output)
+        if self.autoclose:
+            self.close()
+        return output
+
+    def close(self):
+        self.destination.close()
+        self.opened = None
+
+
+class StringInput(Input):
+
+    """
+    Direct string input.
+    """
+
+    default_source_path = '<string>'
+
+    def read(self):
+        """Decode and return the source string."""
+        return self.decode(self.source)
+
+
+class StringOutput(Output):
+
+    """
+    Direct string output.
+    """
+
+    default_destination_path = '<string>'
+
+    def write(self, data):
+        """Encode `data`, store it in `self.destination`, and return it."""
+        self.destination = self.encode(data)
+        return self.destination
+
+
+class NullInput(Input):
+
+    """
+    Degenerate input: read nothing.
+    """
+
+    default_source_path = 'null input'
+
+    def read(self):
+        """Return a null string."""
+        return u''
+
+
+class NullOutput(Output):
+
+    """
+    Degenerate output: write nothing.
+    """
+
+    default_destination_path = 'null output'
+
+    def write(self, data):
+        """Do nothing ([don't even] send data to the bit bucket)."""
+        pass

Added: trunk/www/utils/helpers/docutils/docutils/languages/__init__.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/languages/__init__.py     2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/languages/__init__.py     2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,20 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.4 $
+# Date: $Date: 2002/10/09 00:51:44 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains modules for language-dependent features of Docutils.
+"""
+
+__docformat__ = 'reStructuredText'
+
+_languages = {}
+
+def get_language(language_code):
+    if _languages.has_key(language_code):
+        return _languages[language_code]
+    module = __import__(language_code, globals(), locals())
+    _languages[language_code] = module
+    return module

Added: trunk/www/utils/helpers/docutils/docutils/languages/de.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/languages/de.py   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/languages/de.py   2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,55 @@
+# Authors:   David Goodger; Gunnar Schwant
+# Contact:   address@hidden
+# Revision:  $Revision: 1.4 $
+# Date:      $Date: 2003/03/27 00:21:20 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+German language mappings for language-dependent features of Docutils.
+"""
+
+__docformat__ = 'reStructuredText'
+
+labels = {
+    'author': 'Autor',
+    'authors': 'Autoren',
+    'organization': 'Organisation',
+    'address': 'Adresse',
+    'contact': 'Kontakt',
+    'version': 'Version',
+    'revision': 'Revision',
+    'status': 'Status',
+    'date': 'Datum',
+    'dedication': 'Widmung',
+    'copyright': 'Copyright',
+    'abstract': 'Zusammenfassung',
+    'attention': 'Achtung!',
+    'caution': 'Vorsicht!',
+    'danger': '!GEFAHR!',
+    'error': 'Fehler',
+    'hint': 'Hinweis',
+    'important': 'Wichtig',
+    'note': 'Bemerkung',
+    'tip': 'Tipp',
+    'warning': 'Warnung',
+    'contents': 'Inhalt'}
+"""Mapping of node class name to label text."""
+
+bibliographic_fields = {
+    'autor': 'author',
+    'autoren': 'authors',
+    'organisation': 'organization',
+    'adresse': 'address',
+    'kontakt': 'contact',
+    'version': 'version',
+    'revision': 'revision',
+    'status': 'status',
+    'datum': 'date',
+    'copyright': 'copyright',
+    'widmung': 'dedication',
+    'zusammenfassung': 'abstract'}
+"""German (lowcased) to canonical name mapping for bibliographic fields."""
+
+author_separators = [';', ',']
+"""List of separator strings for the 'Authors' bibliographic field. Tried in
+order."""

Added: trunk/www/utils/helpers/docutils/docutils/languages/en.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/languages/en.py   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/languages/en.py   2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,55 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.5 $
+# Date: $Date: 2003/03/27 00:21:21 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+English-language mappings for language-dependent features of Docutils.
+"""
+
+__docformat__ = 'reStructuredText'
+
+labels = {
+      'author': 'Author',
+      'authors': 'Authors',
+      'organization': 'Organization',
+      'address': 'Address',
+      'contact': 'Contact',
+      'version': 'Version',
+      'revision': 'Revision',
+      'status': 'Status',
+      'date': 'Date',
+      'copyright': 'Copyright',
+      'dedication': 'Dedication',
+      'abstract': 'Abstract',
+      'attention': 'Attention!',
+      'caution': 'Caution!',
+      'danger': '!DANGER!',
+      'error': 'Error',
+      'hint': 'Hint',
+      'important': 'Important',
+      'note': 'Note',
+      'tip': 'Tip',
+      'warning': 'Warning',
+      'contents': 'Contents'}
+"""Mapping of node class name to label text."""
+
+bibliographic_fields = {
+      'author': 'author',
+      'authors': 'authors',
+      'organization': 'organization',
+      'address': 'address',
+      'contact': 'contact',
+      'version': 'version',
+      'revision': 'revision',
+      'status': 'status',
+      'date': 'date',
+      'copyright': 'copyright',
+      'dedication': 'dedication',
+      'abstract': 'abstract'}
+"""English (lowcased) to canonical name mapping for bibliographic fields."""
+
+author_separators = [';', ',']
+"""List of separator strings for the 'Authors' bibliographic field. Tried in
+order."""

Added: trunk/www/utils/helpers/docutils/docutils/languages/es.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/languages/es.py   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/languages/es.py   2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,55 @@
+# Author: Marcelo Huerta San Martín
+# Contact: address@hidden
+# Revision: $Revision: 1.1 $
+# Date: $Date: 2003/05/29 15:16:21 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Spanish-language mappings for language-dependent features of Docutils.
+"""
+
+__docformat__ = 'reStructuredText'
+
+labels = {
+      'author': u'Autor',
+      'authors': u'Autores',
+      'organization': u'Organizaci\u00f3n',
+      'address': u'Direcci\u00f3n',
+      'contact': u'Contacto',
+      'version': u'Versi\u00f3n',
+      'revision': u'Revisi\u00f3n',
+      'status': u'Estado',
+      'date': u'Fecha',
+      'copyright': u'Copyright',
+      'dedication': u'Dedicatoria',
+      'abstract': u'Resumen',
+      'attention': u'\u00a1Atenci\u00f3n!',
+      'caution': u'\u00a1Precauci\u00f3n!',
+      'danger': u'\u00a1PELIGRO!',
+      'error': u'Error',
+      'hint': u'Sugerencia',
+      'important': u'Importante',
+      'note': u'Nota',
+      'tip': u'Consejo',
+      'warning': u'Advertencia',
+      'contents': u'Contenido'}
+"""Mapping of node class name to label text."""
+
+bibliographic_fields = {
+      u'autor': 'author',
+      u'autores': 'authors',
+      u'organizaci\u00f3n': 'organization',
+      u'direcci\u00f3n': 'address',
+      u'contacto': 'contact',
+      u'versi\u00f3n': 'version',
+      u'revisi\u00f3n': 'revision',
+      u'estado': 'status',
+      u'fecha': 'date',
+      u'copyright': 'copyright',
+      u'dedicatoria': 'dedication',
+      u'resumen': 'abstract'}
+"""Spanish (lowcased) to canonical name mapping for bibliographic fields."""
+
+author_separators = [';', ',']
+"""List of separator strings for the 'Authors' bibliographic field. Tried in
+order."""

Added: trunk/www/utils/helpers/docutils/docutils/languages/fr.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/languages/fr.py   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/languages/fr.py   2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,55 @@
+# Author: Stefane Fermigier
+# Contact: address@hidden
+# Revision: $Revision: 1.4 $
+# Date: $Date: 2003/04/03 01:11:40 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+French-language mappings for language-dependent features of Docutils.
+"""
+
+__docformat__ = 'reStructuredText'
+
+labels = {
+      u'author': u'Auteur',
+      u'authors': u'Auteurs',
+      u'organization': u'Organisation',
+      u'address': u'Adresse',
+      u'contact': u'Contact',
+      u'version': u'Version',
+      u'revision': u'R\u00e9vision',
+      u'status': u'Statut',
+      u'date': u'Date',
+      u'copyright': u'Copyright',
+      u'dedication': u'D\u00e9dicace',
+      u'abstract': u'R\u00e9sum\u00e9',
+      u'attention': u'Attention!',
+      u'caution': u'Avertissement!',
+      u'danger': u'!DANGER!',
+      u'error': u'Erreur',
+      u'hint': u'Indication',
+      u'important': u'Important',
+      u'note': u'Note',
+      u'tip': u'Astuce',
+      u'warning': u'Avis',
+      u'contents': u'Contenu'}
+"""Mapping of node class name to label text."""
+
+bibliographic_fields = {
+      u'auteur': u'author',
+      u'auteurs': u'authors',
+      u'organisation': u'organization',
+      u'adresse': u'address',
+      u'contact': u'contact',
+      u'version': u'version',
+      u'r\u00e9vision': u'revision',
+      u'statut': u'status',
+      u'date': u'date',
+      u'copyright': u'copyright',
+      u'd\u00e9dicace': u'dedication',
+      u'r\u00e9sum\u00e9': u'abstract'}
+"""French (lowcased) to canonical name mapping for bibliographic fields."""
+
+author_separators = [';', ',']
+"""List of separator strings for the 'Authors' bibliographic field. Tried in
+order."""

Added: trunk/www/utils/helpers/docutils/docutils/languages/it.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/languages/it.py   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/languages/it.py   2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,55 @@
+# Author: Nicola Larosa
+# Contact: address@hidden
+# Revision: $Revision: 1.2 $
+# Date: $Date: 2003/03/27 00:21:21 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Italian-language mappings for language-dependent features of Docutils.
+"""
+
+__docformat__ = 'reStructuredText'
+
+labels = {
+      'author': 'Autore',
+      'authors': 'Autori',
+      'organization': 'Organizzazione',
+      'address': 'Indirizzo',
+      'contact': 'Contatti',
+      'version': 'Versione',
+      'revision': 'Revisione',
+      'status': 'Status',
+      'date': 'Data',
+      'copyright': 'Copyright',
+      'dedication': 'Dedica',
+      'abstract': 'Riassunto',
+      'attention': 'Attenzione!',
+      'caution': 'Cautela!',
+      'danger': '!PERICOLO!',
+      'error': 'Errore',
+      'hint': 'Suggerimento',
+      'important': 'Importante',
+      'note': 'Nota',
+      'tip': 'Consiglio',
+      'warning': 'Avvertenza',
+      'contents': 'Indice'}
+"""Mapping of node class name to label text."""
+
+bibliographic_fields = {
+      'autore': 'author',
+      'autori': 'authors',
+      'organizzazione': 'organization',
+      'indirizzo': 'address',
+      'contatti': 'contact',
+      'versione': 'version',
+      'revisione': 'revision',
+      'status': 'status',
+      'data': 'date',
+      'copyright': 'copyright',
+      'dedica': 'dedication',
+      'riassunto': 'abstract'}
+"""Italian (lowcased) to canonical name mapping for bibliographic fields."""
+
+author_separators = [';', ',']
+"""List of separator strings for the 'Authors' bibliographic field. Tried in
+order."""

Added: trunk/www/utils/helpers/docutils/docutils/languages/sk.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/languages/sk.py   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/languages/sk.py   2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,55 @@
+# :Author: Miroslav Vasko
+# :Contact: address@hidden
+# :Revision: $Revision: 1.3 $
+# :Date: $Date: 2003/03/27 00:21:21 $
+# :Copyright: This module has been placed in the public domain.
+
+"""
+Slovak-language mappings for language-dependent features of Docutils.
+"""
+
+__docformat__ = 'reStructuredText'
+
+labels = {
+      'author': u'Autor',
+      'authors': u'Autori',
+      'organization': u'Organiz\u00E1cia',
+      'address': u'Adresa',
+      'contact': u'Kontakt',
+      'version': u'Verzia',
+      'revision': u'Rev\u00EDzia',
+      'status': u'Stav',
+      'date': u'D\u00E1tum',
+      'copyright': u'Copyright',
+      'dedication': u'Venovanie',
+      'abstract': u'Abstraktne',
+      'attention': u'Pozor!',
+      'caution': u'Opatrne!',
+      'danger': u'!NEBEZPE\u010cENSTVO!',
+      'error': u'Chyba',
+      'hint': u'Rada',
+      'important': u'D\u00F4le\u017Eit\u00E9',
+      'note': u'Pozn\u00E1mka',
+      'tip': u'Tip',
+      'warning': u'Varovanie',
+      'contents': u'Obsah'}
+"""Mapping of node class name to label text."""
+
+bibliographic_fields = {
+      u'autor': 'author',
+      u'autori': 'authors',
+      u'organiz\u00E1cia': 'organization',
+      u'adresa': 'address',
+      u'kontakt': 'contact',
+      u'verzia': 'version',
+      u'rev\u00EDzia': 'revision',
+      u'stav': 'status',
+      u'd\u00E1tum': 'date',
+      u'copyright': 'copyright',
+      u'venovanie': 'dedication',
+      u'abstraktne': 'abstract'}
+"""Slovak (lowcased) to canonical name mapping for bibliographic fields."""
+
+author_separators = [';', ',']
+"""List of separator strings for the 'Authors' bibliographic field. Tried in
+order."""

Added: trunk/www/utils/helpers/docutils/docutils/languages/sv.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/languages/sv.py   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/languages/sv.py   2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,55 @@
+# Author:    Adam Chodorowski
+# Contact:   address@hidden
+# Revision:  $Revision: 1.9 $
+# Date:      $Date: 2003/03/27 00:21:21 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Swedish language mappings for language-dependent features of Docutils.
+"""
+
+__docformat__ = 'reStructuredText'
+
+labels = {
+    'author':       u'F\u00f6rfattare',
+    'authors':      u'F\u00f6rfattare',
+    'organization': u'Organisation',
+    'address':      u'Adress',
+    'contact':      u'Kontakt',
+    'version':      u'Version',
+    'revision':     u'Revision',
+    'status':       u'Status',
+    'date':         u'Datum',
+    'copyright':    u'Copyright',
+    'dedication':   u'Dedikation',
+    'abstract':     u'Sammanfattning',
+    'attention':    u'Observera!',
+    'caution':      u'Varning!',
+    'danger':       u'FARA!',
+    'error':        u'Fel',
+    'hint':         u'V\u00e4gledning',
+    'important':    u'Viktigt',
+    'note':         u'Notera',
+    'tip':          u'Tips',
+    'warning':      u'Varning',
+    'contents':     u'Inneh\u00e5ll' }
+"""Mapping of node class name to label text."""
+
+bibliographic_fields = {
+    # 'Author' and 'Authors' identical in Swedish; assume the plural:
+    u'f\u00f6rfattare': 'authors',
+    u'organisation':    'organization',
+    u'adress':          'address',
+    u'kontakt':         'contact',
+    u'version':         'version',
+    u'revision':        'revision',
+    u'status':          'status',
+    u'datum':           'date',
+    u'copyright':       'copyright',
+    u'dedikation':      'dedication', 
+    u'sammanfattning':  'abstract' }
+"""Swedish (lowcased) to canonical name mapping for bibliographic fields."""
+
+author_separators = [';', ',']
+"""List of separator strings for the 'Authors' bibliographic field. Tried in
+order."""

Added: trunk/www/utils/helpers/docutils/docutils/nodes.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/nodes.py  2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/nodes.py  2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,1478 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.42 $
+# Date: $Date: 2003/06/03 02:17:27 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Docutils document tree element class library.
+
+Classes in CamelCase are abstract base classes or auxiliary classes. The one
+exception is `Text`, for a text (PCDATA) node; uppercase is used to
+differentiate from element classes.  Classes in lower_case_with_underscores
+are element classes, matching the XML element generic identifiers in the DTD_.
+
+The position of each node (the level at which it can occur) is significant and
+is represented by abstract base classes (`Root`, `Structural`, `Body`,
+`Inline`, etc.).  Certain transformations will be easier because we can use
+``isinstance(node, base_class)`` to determine the position of the node in the
+hierarchy.
+
+.. _DTD: http://docutils.sourceforge.net/spec/docutils.dtd
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import os
+import re
+import xml.dom.minidom
+from types import IntType, SliceType, StringType, UnicodeType, \
+     TupleType, ListType
+from UserString import UserString
+
+
+# ==============================
+#  Functional Node Base Classes
+# ==============================
+
+class Node:
+
+    """Abstract base class of nodes in a document tree."""
+
+    parent = None
+    """Back-reference to the Node immediately containing this Node."""
+
+    document = None
+    """The `document` node at the root of the tree containing this Node."""
+
+    source = None
+    """Path or description of the input source which generated this Node."""
+
+    line = None
+    """The line number (1-based) of the beginning of this Node in `source`."""
+
+    def __nonzero__(self):
+        """
+        Node instances are always true, even if they're empty.  A node is more
+        than a simple container.  Its boolean "truth" does not depend on
+        having one or more subnodes in the doctree.
+
+        Use `len()` to check node length.  Use `None` to represent a boolean
+        false value.
+        """
+        return 1
+
+    def asdom(self, dom=xml.dom.minidom):
+        """Return a DOM **fragment** representation of this Node."""
+        domroot = dom.Document()
+        return self._dom_node(domroot)
+
+    def pformat(self, indent='    ', level=0):
+        """Return an indented pseudo-XML representation, for test purposes."""
+        raise NotImplementedError
+
+    def copy(self):
+        """Return a copy of self."""
+        raise NotImplementedError
+
+    def setup_child(self, child):
+        child.parent = self
+        if self.document:
+            child.document = self.document
+            if child.source is None:
+                child.source = self.document.current_source
+            if child.line is None:
+                child.line = self.document.current_line
+
+    def walk(self, visitor):
+        """
+        Traverse a tree of `Node` objects, calling ``visit_...`` methods of
+        `visitor` when entering each node. If there is no
+        ``visit_particular_node`` method for a node of type
+        ``particular_node``, the ``unknown_visit`` method is called.  (The
+        `walkabout()` method is similar, except it also calls ``depart_...``
+        methods before exiting each node.)
+
+        This tree traversal supports limited in-place tree
+        modifications.  Replacing one node with one or more nodes is
+        OK, as is removing an element.  However, if the node removed
+        or replaced occurs after the current node, the old node will
+        still be traversed, and any new nodes will not.
+
+        Within ``visit_...`` methods (and ``depart_...`` methods for
+        `walkabout()`), `TreePruningException` subclasses may be raised
+        (`SkipChildren`, `SkipSiblings`, `SkipNode`, `SkipDeparture`).
+
+        Parameter `visitor`: A `NodeVisitor` object, containing a
+        ``visit_...`` method for each `Node` subclass encountered.
+        """
+        name = 'visit_' + self.__class__.__name__
+        method = getattr(visitor, name, visitor.unknown_visit)
+        visitor.document.reporter.debug(name, category='nodes.Node.walk')
+        try:
+            method(self)
+        except (SkipChildren, SkipNode):
+            return
+        except SkipDeparture:           # not applicable; ignore
+            pass
+        children = self.get_children()
+        try:
+            for child in children[:]:
+                child.walk(visitor)
+        except SkipSiblings:
+            pass
+
+    def walkabout(self, visitor):
+        """
+        Perform a tree traversal similarly to `Node.walk()` (which see),
+        except also call ``depart_...`` methods before exiting each node. If
+        there is no ``depart_particular_node`` method for a node of type
+        ``particular_node``, the ``unknown_departure`` method is called.
+
+        Parameter `visitor`: A `NodeVisitor` object, containing ``visit_...``
+        and ``depart_...`` methods for each `Node` subclass encountered.
+        """
+        call_depart = 1
+        name = 'visit_' + self.__class__.__name__
+        method = getattr(visitor, name, visitor.unknown_visit)
+        visitor.document.reporter.debug(name, category='nodes.Node.walkabout')
+        try:
+            try:
+                method(self)
+            except SkipNode:
+                return
+            except SkipDeparture:
+                call_depart = 0
+            children = self.get_children()
+            try:
+                for child in children[:]:
+                    child.walkabout(visitor)
+            except SkipSiblings:
+                pass
+        except SkipChildren:
+            pass
+        if call_depart:
+            name = 'depart_' + self.__class__.__name__
+            method = getattr(visitor, name, visitor.unknown_departure)
+            visitor.document.reporter.debug(
+                  name, category='nodes.Node.walkabout')
+            method(self)
+
+
+class Text(Node, UserString):
+
+    """
+    Instances are terminal nodes (leaves) containing text only; no child
+    nodes or attributes.  Initialize by passing a string to the constructor.
+    Access the text itself with the `astext` method.
+    """
+
+    tagname = '#text'
+
+    def __init__(self, data, rawsource=''):
+        UserString.__init__(self, data)
+
+        self.rawsource = rawsource
+        """The raw text from which this element was constructed."""
+
+    def __repr__(self):
+        data = repr(self.data)
+        if len(data) > 70:
+            data = repr(self.data[:64] + ' ...')
+        return '<%s: %s>' % (self.tagname, data)
+
+    def __len__(self):
+        return len(self.data)
+
+    def shortrepr(self):
+        data = repr(self.data)
+        if len(data) > 20:
+            data = repr(self.data[:16] + ' ...')
+        return '<%s: %s>' % (self.tagname, data)
+
+    def _dom_node(self, domroot):
+        return domroot.createTextNode(self.data)
+
+    def astext(self):
+        return self.data
+
+    def copy(self):
+        return self.__class__(self.data)
+
+    def pformat(self, indent='    ', level=0):
+        result = []
+        indent = indent * level
+        for line in self.data.splitlines():
+            result.append(indent + line + '\n')
+        return ''.join(result)
+
+    def get_children(self):
+        """Text nodes have no children. Return []."""
+        return []
+
+
+class Element(Node):
+
+    """
+    `Element` is the superclass to all specific elements.
+
+    Elements contain attributes and child nodes.  Elements emulate
+    dictionaries for attributes, indexing by attribute name (a string).  To
+    set the attribute 'att' to 'value', do::
+
+        element['att'] = 'value'
+
+    Elements also emulate lists for child nodes (element nodes and/or text
+    nodes), indexing by integer.  To get the first child node, use::
+
+        element[0]
+
+    Elements may be constructed using the ``+=`` operator.  To add one new
+    child node to element, do::
+
+        element += node
+
+    This is equivalent to ``element.append(node)``.
+
+    To add a list of multiple child nodes at once, use the same ``+=``
+    operator::
+
+        element += [node1, node2]
+
+    This is equivalent to ``element.extend([node1, node2])``.
+    """
+
+    tagname = None
+    """The element generic identifier. If None, it is set as an instance
+    attribute to the name of the class."""
+
+    child_text_separator = '\n\n'
+    """Separator for child nodes, used by `astext()` method."""
+
+    def __init__(self, rawsource='', *children, **attributes):
+        self.rawsource = rawsource
+        """The raw text from which this element was constructed."""
+
+        self.children = []
+        """List of child nodes (elements and/or `Text`)."""
+
+        self.extend(children)           # maintain parent info
+
+        self.attributes = {}
+        """Dictionary of attribute {name: value}."""
+
+        for att, value in attributes.items():
+            self.attributes[att.lower()] = value
+
+        if self.tagname is None:
+            self.tagname = self.__class__.__name__
+
+    def _dom_node(self, domroot):
+        element = domroot.createElement(self.tagname)
+        for attribute, value in self.attributes.items():
+            if isinstance(value, ListType):
+                value = ' '.join(['%s' % v for v in value])
+            element.setAttribute(attribute, '%s' % value)
+        for child in self.children:
+            element.appendChild(child._dom_node(domroot))
+        return element
+
+    def __repr__(self):
+        data = ''
+        for c in self.children:
+            data += c.shortrepr()
+            if len(data) > 60:
+                data = data[:56] + ' ...'
+                break
+        if self.hasattr('name'):
+            return '<%s "%s": %s>' % (self.__class__.__name__,
+                                      self.attributes['name'], data)
+        else:
+            return '<%s: %s>' % (self.__class__.__name__, data)
+
+    def shortrepr(self):
+        if self.hasattr('name'):
+            return '<%s "%s"...>' % (self.__class__.__name__,
+                                      self.attributes['name'])
+        else:
+            return '<%s...>' % self.tagname
+
+    def __str__(self):
+        return unicode(self).encode('raw_unicode_escape')
+
+    def __unicode__(self):
+        if self.children:
+            return u'%s%s%s' % (self.starttag(),
+                                 ''.join([str(c) for c in self.children]),
+                                 self.endtag())
+        else:
+            return self.emptytag()
+
+    def starttag(self):
+        parts = [self.tagname]
+        for name, value in self.attlist():
+            if value is None:           # boolean attribute
+                parts.append(name)
+            elif isinstance(value, ListType):
+                values = ['%s' % v for v in value]
+                parts.append('%s="%s"' % (name, ' '.join(values)))
+            else:
+                parts.append('%s="%s"' % (name, value))
+        return '<%s>' % ' '.join(parts)
+
+    def endtag(self):
+        return '</%s>' % self.tagname
+
+    def emptytag(self):
+        return u'<%s/>' % ' '.join([self.tagname] +
+                                    ['%s="%s"' % (n, v)
+                                     for n, v in self.attlist()])
+
+    def __len__(self):
+        return len(self.children)
+
+    def __getitem__(self, key):
+        if isinstance(key, UnicodeType) or isinstance(key, StringType):
+            return self.attributes[key]
+        elif isinstance(key, IntType):
+            return self.children[key]
+        elif isinstance(key, SliceType):
+            assert key.step in (None, 1), 'cannot handle slice with stride'
+            return self.children[key.start:key.stop]
+        else:
+            raise TypeError, ('element index must be an integer, a slice, or '
+                              'an attribute name string')
+
+    def __setitem__(self, key, item):
+        if isinstance(key, UnicodeType) or isinstance(key, StringType):
+            self.attributes[str(key)] = item
+        elif isinstance(key, IntType):
+            self.setup_child(item)
+            self.children[key] = item
+        elif isinstance(key, SliceType):
+            assert key.step in (None, 1), 'cannot handle slice with stride'
+            for node in item:
+                self.setup_child(node)
+            self.children[key.start:key.stop] = item
+        else:
+            raise TypeError, ('element index must be an integer, a slice, or '
+                              'an attribute name string')
+
+    def __delitem__(self, key):
+        if isinstance(key, UnicodeType) or isinstance(key, StringType):
+            del self.attributes[key]
+        elif isinstance(key, IntType):
+            del self.children[key]
+        elif isinstance(key, SliceType):
+            assert key.step in (None, 1), 'cannot handle slice with stride'
+            del self.children[key.start:key.stop]
+        else:
+            raise TypeError, ('element index must be an integer, a simple '
+                              'slice, or an attribute name string')
+
+    def __add__(self, other):
+        return self.children + other
+
+    def __radd__(self, other):
+        return other + self.children
+
+    def __iadd__(self, other):
+        """Append a node or a list of nodes to `self.children`."""
+        if isinstance(other, Node):
+            self.setup_child(other)
+            self.children.append(other)
+        elif other is not None:
+            for node in other:
+                self.setup_child(node)
+            self.children.extend(other)
+        return self
+
+    def astext(self):
+        return self.child_text_separator.join(
+              [child.astext() for child in self.children])
+
+    def attlist(self):
+        attlist = self.attributes.items()
+        attlist.sort()
+        return attlist
+
+    def get(self, key, failobj=None):
+        return self.attributes.get(key, failobj)
+
+    def hasattr(self, attr):
+        return self.attributes.has_key(attr)
+
+    def delattr(self, attr):
+        if self.attributes.has_key(attr):
+            del self.attributes[attr]
+
+    def setdefault(self, key, failobj=None):
+        return self.attributes.setdefault(key, failobj)
+
+    has_key = hasattr
+
+    def append(self, item):
+        self.setup_child(item)
+        self.children.append(item)
+
+    def extend(self, item):
+        for node in item:
+            self.setup_child(node)
+        self.children.extend(item)
+
+    def insert(self, index, item):
+        if isinstance(item, Node):
+            self.setup_child(item)
+            self.children.insert(index, item)
+        elif item is not None:
+            self[index:index] = item
+
+    def pop(self, i=-1):
+        return self.children.pop(i)
+
+    def remove(self, item):
+        self.children.remove(item)
+
+    def index(self, item):
+        return self.children.index(item)
+
+    def replace(self, old, new):
+        """Replace one child `Node` with another child or children."""
+        index = self.index(old)
+        if isinstance(new, Node):
+            self.setup_child(new)
+            self[index] = new
+        elif new is not None:
+            self[index:index+1] = new
+
+    def first_child_matching_class(self, childclass, start=0, end=sys.maxint):
+        """
+        Return the index of the first child whose class exactly matches.
+
+        Parameters:
+
+        - `childclass`: A `Node` subclass to search for, or a tuple of `Node`
+          classes. If a tuple, any of the classes may match.
+        - `start`: Initial index to check.
+        - `end`: Initial index to *not* check.
+        """
+        if not isinstance(childclass, TupleType):
+            childclass = (childclass,)
+        for index in range(start, min(len(self), end)):
+            for c in childclass:
+                if isinstance(self[index], c):
+                    return index
+        return None
+
+    def first_child_not_matching_class(self, childclass, start=0,
+                                       end=sys.maxint):
+        """
+        Return the index of the first child whose class does *not* match.
+
+        Parameters:
+
+        - `childclass`: A `Node` subclass to skip, or a tuple of `Node`
+          classes. If a tuple, none of the classes may match.
+        - `start`: Initial index to check.
+        - `end`: Initial index to *not* check.
+        """
+        if not isinstance(childclass, TupleType):
+            childclass = (childclass,)
+        for index in range(start, min(len(self), end)):
+            match = 0
+            for c in childclass:
+                if isinstance(self.children[index], c):
+                    match = 1
+                    break
+            if not match:
+                return index
+        return None
+
+    def pformat(self, indent='    ', level=0):
+        return ''.join(['%s%s\n' % (indent * level, self.starttag())] +
+                       [child.pformat(indent, level+1)
+                        for child in self.children])
+
+    def get_children(self):
+        """Return this element's children."""
+        return self.children
+
+    def copy(self):
+        return self.__class__(**self.attributes)
+
+    def set_class(self, name):
+        """Add a new name to the "class" attribute."""
+        self.attributes['class'] = (self.attributes.get('class', '') + ' '
+                                    + name.lower()).strip()
+
+
+class TextElement(Element):
+
+    """
+    An element which directly contains text.
+
+    Its children are all Text or TextElement nodes.
+    """
+
+    child_text_separator = ''
+    """Separator for child nodes, used by `astext()` method."""
+
+    def __init__(self, rawsource='', text='', *children, **attributes):
+        if text != '':
+            textnode = Text(text)
+            Element.__init__(self, rawsource, textnode, *children,
+                              **attributes)
+        else:
+            Element.__init__(self, rawsource, *children, **attributes)
+
+
+class FixedTextElement(TextElement):
+
+    """An element which directly contains preformatted text."""
+
+    def __init__(self, rawsource='', text='', *children, **attributes):
+        TextElement.__init__(self, rawsource, text, *children, **attributes)
+        self.attributes['xml:space'] = 'preserve'
+
+
+# ========
+#  Mixins
+# ========
+
+class Resolvable:
+
+    resolved = 0
+
+
+class BackLinkable:
+
+    def add_backref(self, refid):
+        self.setdefault('backrefs', []).append(refid)
+
+
+# ====================
+#  Element Categories
+# ====================
+
+class Root: pass
+
+class Titular: pass
+
+class PreDecorative:
+    """Category of Node which may occur before Decorative Nodes."""
+
+class PreBibliographic(PreDecorative):
+    """Category of Node which may occur before Bibliographic Nodes."""
+
+class Bibliographic(PreDecorative): pass
+
+class Decorative: pass
+
+class Structural: pass
+
+class Body: pass
+
+class General(Body): pass
+
+class Sequential(Body): pass
+
+class Admonition(Body): pass
+
+class Special(Body):
+    """Special internal body elements."""
+
+class Invisible:
+    """Internal elements that don't appear in output."""
+
+class Part: pass
+
+class Inline: pass
+
+class Referential(Resolvable): pass
+
+class Targetable(Resolvable):
+
+    referenced = 0
+
+class Labeled:
+    """Contains a `label` as its first element."""
+
+
+# ==============
+#  Root Element
+# ==============
+
+class document(Root, Structural, Element):
+
+    def __init__(self, settings, reporter, *args, **kwargs):
+        Element.__init__(self, *args, **kwargs)
+
+        self.current_source = None
+        """Path to or description of the input source being processed."""
+
+        self.current_line = None
+        """Line number (1-based) of `current_source`."""
+
+        self.settings = settings
+        """Runtime settings data record."""
+
+        self.reporter = reporter
+        """System message generator."""
+
+        self.external_targets = []
+        """List of external target nodes."""
+
+        self.internal_targets = []
+        """List of internal target nodes."""
+
+        self.indirect_targets = []
+        """List of indirect target nodes."""
+
+        self.substitution_defs = {}
+        """Mapping of substitution names to substitution_definition nodes."""
+
+        self.substitution_names = {}
+        """Mapping of case-normalized substitution names to case-sensitive
+        names."""
+
+        self.refnames = {}
+        """Mapping of names to lists of referencing nodes."""
+
+        self.refids = {}
+        """Mapping of ids to lists of referencing nodes."""
+
+        self.nameids = {}
+        """Mapping of names to unique id's."""
+
+        self.nametypes = {}
+        """Mapping of names to hyperlink type (boolean: True => explicit,
+        False => implicit."""
+
+        self.ids = {}
+        """Mapping of ids to nodes."""
+
+        self.substitution_refs = {}
+        """Mapping of substitution names to lists of substitution_reference
+        nodes."""
+
+        self.footnote_refs = {}
+        """Mapping of footnote labels to lists of footnote_reference nodes."""
+
+        self.citation_refs = {}
+        """Mapping of citation labels to lists of citation_reference nodes."""
+
+        self.anonymous_targets = []
+        """List of anonymous target nodes."""
+
+        self.anonymous_refs = []
+        """List of anonymous reference nodes."""
+
+        self.autofootnotes = []
+        """List of auto-numbered footnote nodes."""
+
+        self.autofootnote_refs = []
+        """List of auto-numbered footnote_reference nodes."""
+
+        self.symbol_footnotes = []
+        """List of symbol footnote nodes."""
+
+        self.symbol_footnote_refs = []
+        """List of symbol footnote_reference nodes."""
+
+        self.footnotes = []
+        """List of manually-numbered footnote nodes."""
+
+        self.citations = []
+        """List of citation nodes."""
+
+        self.autofootnote_start = 1
+        """Initial auto-numbered footnote number."""
+
+        self.symbol_footnote_start = 0
+        """Initial symbol footnote symbol index."""
+
+        self.id_start = 1
+        """Initial ID number."""
+
+        self.parse_messages = []
+        """System messages generated while parsing."""
+
+        self.transform_messages = []
+        """System messages generated while applying transforms."""
+
+        import docutils.transforms
+        self.transformer = docutils.transforms.Transformer(self)
+        """Storage for transforms to be applied to this document."""
+
+        self.document = self
+
+    def asdom(self, dom=xml.dom.minidom):
+        """Return a DOM representation of this document."""
+        domroot = dom.Document()
+        domroot.appendChild(self._dom_node(domroot))
+        return domroot
+
+    def set_id(self, node, msgnode=None):
+        if node.has_key('id'):
+            id = node['id']
+            if self.ids.has_key(id) and self.ids[id] is not node:
+                msg = self.reporter.severe('Duplicate ID: "%s".' % id)
+                if msgnode != None:
+                    msgnode += msg
+        else:
+            if node.has_key('name'):
+                id = make_id(node['name'])
+            else:
+                id = ''
+            while not id or self.ids.has_key(id):
+                id = 'id%s' % self.id_start
+                self.id_start += 1
+            node['id'] = id
+        self.ids[id] = node
+        return id
+
+    def set_name_id_map(self, node, id, msgnode=None, explicit=None):
+        """
+        `self.nameids` maps names to IDs, while `self.nametypes` maps names to
+        booleans representing hyperlink type (True==explicit,
+        False==implicit).  This method updates the mappings.
+
+        The following state transition table shows how `self.nameids` ("ids")
+        and `self.nametypes` ("types") change with new input (a call to this
+        method), and what actions are performed:
+
+        ====  =====  ========  ========  =======  ====  =====  =====
+         Old State    Input          Action        New State   Notes
+        -----------  --------  -----------------  -----------  -----
+        ids   types  new type  sys.msg.  dupname  ids   types
+        ====  =====  ========  ========  =======  ====  =====  =====
+        --    --     explicit  --        --       new   True
+        --    --     implicit  --        --       new   False
+        None  False  explicit  --        --       new   True
+        old   False  explicit  implicit  old      new   True
+        None  True   explicit  explicit  new      None  True
+        old   True   explicit  explicit  new,old  None  True   [#]_
+        None  False  implicit  implicit  new      None  False
+        old   False  implicit  implicit  new,old  None  False
+        None  True   implicit  implicit  new      None  True
+        old   True   implicit  implicit  new      old   True
+        ====  =====  ========  ========  =======  ====  =====  =====
+
+        .. [#] Do not clear the name-to-id map or invalidate the old target if
+           both old and new targets are external and refer to identical URIs.
+           The new target is invalidated regardless.
+        """
+        if node.has_key('name'):
+            name = node['name']
+            if self.nameids.has_key(name):
+                self.set_duplicate_name_id(node, id, name, msgnode, explicit)
+            else:
+                self.nameids[name] = id
+                self.nametypes[name] = explicit
+
+    def set_duplicate_name_id(self, node, id, name, msgnode, explicit):
+        old_id = self.nameids[name]
+        old_explicit = self.nametypes[name]
+        self.nametypes[name] = old_explicit or explicit
+        if explicit:
+            if old_explicit:
+                level = 2
+                if old_id is not None:
+                    old_node = self.ids[old_id]
+                    if node.has_key('refuri'):
+                        refuri = node['refuri']
+                        if old_node.has_key('name') \
+                               and old_node.has_key('refuri') \
+                               and old_node['refuri'] == refuri:
+                            level = 1   # just inform if refuri's identical
+                    if level > 1:
+                        dupname(old_node)
+                        self.nameids[name] = None
+                msg = self.reporter.system_message(
+                    level, 'Duplicate explicit target name: "%s".' % name,
+                    backrefs=[id], base_node=node)
+                if msgnode != None:
+                    msgnode += msg
+                dupname(node)
+            else:
+                self.nameids[name] = id
+                if old_id is not None:
+                    old_node = self.ids[old_id]
+                    dupname(old_node)
+        else:
+            if old_id is not None and not old_explicit:
+                self.nameids[name] = None
+                old_node = self.ids[old_id]
+                dupname(old_node)
+            dupname(node)
+        if not explicit or (not old_explicit and old_id is not None):
+            msg = self.reporter.info(
+                'Duplicate implicit target name: "%s".' % name,
+                backrefs=[id], base_node=node)
+            if msgnode != None:
+                msgnode += msg
+
+    def has_name(self, name):
+        return self.nameids.has_key(name)
+
+    def note_implicit_target(self, target, msgnode=None):
+        id = self.set_id(target, msgnode)
+        self.set_name_id_map(target, id, msgnode, explicit=None)
+
+    def note_explicit_target(self, target, msgnode=None):
+        id = self.set_id(target, msgnode)
+        self.set_name_id_map(target, id, msgnode, explicit=1)
+
+    def note_refname(self, node):
+        self.refnames.setdefault(node['refname'], []).append(node)
+
+    def note_refid(self, node):
+        self.refids.setdefault(node['refid'], []).append(node)
+
+    def note_external_target(self, target):
+        self.external_targets.append(target)
+
+    def note_internal_target(self, target):
+        self.internal_targets.append(target)
+
+    def note_indirect_target(self, target):
+        self.indirect_targets.append(target)
+        if target.has_key('name'):
+            self.note_refname(target)
+
+    def note_anonymous_target(self, target):
+        self.set_id(target)
+        self.anonymous_targets.append(target)
+
+    def note_anonymous_ref(self, ref):
+        self.anonymous_refs.append(ref)
+
+    def note_autofootnote(self, footnote):
+        self.set_id(footnote)
+        self.autofootnotes.append(footnote)
+
+    def note_autofootnote_ref(self, ref):
+        self.set_id(ref)
+        self.autofootnote_refs.append(ref)
+
+    def note_symbol_footnote(self, footnote):
+        self.set_id(footnote)
+        self.symbol_footnotes.append(footnote)
+
+    def note_symbol_footnote_ref(self, ref):
+        self.set_id(ref)
+        self.symbol_footnote_refs.append(ref)
+
+    def note_footnote(self, footnote):
+        self.set_id(footnote)
+        self.footnotes.append(footnote)
+
+    def note_footnote_ref(self, ref):
+        self.set_id(ref)
+        self.footnote_refs.setdefault(ref['refname'], []).append(ref)
+        self.note_refname(ref)
+
+    def note_citation(self, citation):
+        self.citations.append(citation)
+
+    def note_citation_ref(self, ref):
+        self.set_id(ref)
+        self.citation_refs.setdefault(ref['refname'], []).append(ref)
+        self.note_refname(ref)
+
+    def note_substitution_def(self, subdef, def_name, msgnode=None):
+        name = subdef['name'] = whitespace_normalize_name(def_name)
+        if self.substitution_defs.has_key(name):
+            msg = self.reporter.error(
+                  'Duplicate substitution definition name: "%s".' % name,
+                  base_node=subdef)
+            if msgnode != None:
+                msgnode += msg
+            oldnode = self.substitution_defs[name]
+            dupname(oldnode)
+        # keep only the last definition:
+        self.substitution_defs[name] = subdef
+        # case-insensitive mapping:
+        self.substitution_names[fully_normalize_name(name)] = name
+
+    def note_substitution_ref(self, subref, refname):
+        name = subref['refname'] = whitespace_normalize_name(refname)
+        self.substitution_refs.setdefault(name, []).append(subref)
+
+    def note_pending(self, pending, priority=None):
+        self.transformer.add_pending(pending, priority)
+
+    def note_parse_message(self, message):
+        self.parse_messages.append(message)
+
+    def note_transform_message(self, message):
+        self.transform_messages.append(message)
+
+    def note_source(self, source, offset):
+        self.current_source = source
+        if offset is None:
+            self.current_line = offset
+        else:
+            self.current_line = offset + 1
+
+    def copy(self):
+        return self.__class__(self.settings, self.reporter,
+                              **self.attributes)
+
+
+# ================
+#  Title Elements
+# ================
+
+class title(Titular, PreBibliographic, TextElement): pass
+class subtitle(Titular, PreBibliographic, TextElement): pass
+class rubric(Titular, TextElement): pass
+
+
+# ========================
+#  Bibliographic Elements
+# ========================
+
+class docinfo(Bibliographic, Element): pass
+class author(Bibliographic, TextElement): pass
+class authors(Bibliographic, Element): pass
+class organization(Bibliographic, TextElement): pass
+class address(Bibliographic, FixedTextElement): pass
+class contact(Bibliographic, TextElement): pass
+class version(Bibliographic, TextElement): pass
+class revision(Bibliographic, TextElement): pass
+class status(Bibliographic, TextElement): pass
+class date(Bibliographic, TextElement): pass
+class copyright(Bibliographic, TextElement): pass
+
+
+# =====================
+#  Decorative Elements
+# =====================
+
+class decoration(Decorative, Element): pass
+class header(Decorative, Element): pass
+class footer(Decorative, Element): pass
+
+
+# =====================
+#  Structural Elements
+# =====================
+
+class section(Structural, Element): pass
+
+
+class topic(Structural, Element):
+
+    """
+    Topics are terminal, "leaf" mini-sections, like block quotes with titles,
+    or textual figures.  A topic is just like a section, except that it has no
+    subsections, and it doesn't have to conform to section placement rules.
+
+    Topics are allowed wherever body elements (list, table, etc.) are allowed,
+    but only at the top level of a section or document.  Topics cannot nest
+    inside topics, sidebars, or body elements; you can't have a topic inside a
+    table, list, block quote, etc.
+    """
+
+
+class sidebar(Structural, Element):
+
+    """
+    Sidebars are like miniature, parallel documents that occur inside other
+    documents, providing related or reference material.  A sidebar is
+    typically offset by a border and "floats" to the side of the page; the
+    document's main text may flow around it.  Sidebars can also be likened to
+    super-footnotes; their content is outside of the flow of the document's
+    main text.
+
+    Sidebars are allowed wherever body elements (list, table, etc.) are
+    allowed, but only at the top level of a section or document.  Sidebars
+    cannot nest inside sidebars, topics, or body elements; you can't have a
+    sidebar inside a table, list, block quote, etc.
+    """
+
+
+class transition(Structural, Element): pass
+
+
+# ===============
+#  Body Elements
+# ===============
+
+class paragraph(General, TextElement): pass
+class bullet_list(Sequential, Element): pass
+class enumerated_list(Sequential, Element): pass
+class list_item(Part, Element): pass
+class definition_list(Sequential, Element): pass
+class definition_list_item(Part, Element): pass
+class term(Part, TextElement): pass
+class classifier(Part, TextElement): pass
+class definition(Part, Element): pass
+class field_list(Sequential, Element): pass
+class field(Part, Element): pass
+class field_name(Part, TextElement): pass
+class field_body(Part, Element): pass
+
+
+class option(Part, Element):
+
+    child_text_separator = ''
+
+
+class option_argument(Part, TextElement):
+
+    def astext(self):
+        return self.get('delimiter', ' ') + TextElement.astext(self)
+
+
+class option_group(Part, Element):
+
+    child_text_separator = ', '
+
+
+class option_list(Sequential, Element): pass
+
+
+class option_list_item(Part, Element):
+
+    child_text_separator = '  '
+
+
+class option_string(Part, TextElement): pass
+class description(Part, Element): pass
+class literal_block(General, FixedTextElement): pass
+class doctest_block(General, FixedTextElement): pass
+class line_block(General, FixedTextElement): pass
+class block_quote(General, Element): pass
+class attribution(Part, TextElement): pass
+class attention(Admonition, Element): pass
+class caution(Admonition, Element): pass
+class danger(Admonition, Element): pass
+class error(Admonition, Element): pass
+class important(Admonition, Element): pass
+class note(Admonition, Element): pass
+class tip(Admonition, Element): pass
+class hint(Admonition, Element): pass
+class warning(Admonition, Element): pass
+class admonition(Admonition, Element): pass
+class comment(Special, Invisible, PreBibliographic, FixedTextElement): pass
+class substitution_definition(Special, Invisible, TextElement): pass
+class target(Special, Invisible, Inline, TextElement, Targetable): pass
+class footnote(General, Element, Labeled, BackLinkable): pass
+class citation(General, Element, Labeled, BackLinkable): pass
+class label(Part, TextElement): pass
+class figure(General, Element): pass
+class caption(Part, TextElement): pass
+class legend(Part, Element): pass
+class table(General, Element): pass
+class tgroup(Part, Element): pass
+class colspec(Part, Element): pass
+class thead(Part, Element): pass
+class tbody(Part, Element): pass
+class row(Part, Element): pass
+class entry(Part, Element): pass
+
+
+class system_message(Special, PreBibliographic, Element, BackLinkable):
+
+    def __init__(self, message=None, *children, **attributes):
+        if message:
+            p = paragraph('', message)
+            children = (p,) + children
+        try:
+            Element.__init__(self, '', *children, **attributes)
+        except:
+            print 'system_message: children=%r' % (children,)
+            raise
+
+    def astext(self):
+        line = self.get('line', '')
+        return u'%s:%s: (%s/%s) %s' % (self['source'], line, self['type'],
+                                       self['level'], Element.astext(self))
+
+
+class pending(Special, Invisible, PreBibliographic, Element):
+
+    """
+    The "pending" element is used to encapsulate a pending operation: the
+    operation (transform), the point at which to apply it, and any data it
+    requires.  Only the pending operation's location within the document is
+    stored in the public document tree (by the "pending" object itself); the
+    operation and its data are stored in the "pending" object's internal
+    instance attributes.
+
+    For example, say you want a table of contents in your reStructuredText
+    document.  The easiest way to specify where to put it is from within the
+    document, with a directive::
+
+        .. contents::
+
+    But the "contents" directive can't do its work until the entire document
+    has been parsed and possibly transformed to some extent.  So the directive
+    code leaves a placeholder behind that will trigger the second phase of
+    its processing, something like this::
+
+        <pending ...public attributes...> + internal attributes
+
+    Use `document.note_pending()` so that the
+    `docutils.transforms.Transformer` stage of processing can run all pending
+    transforms.
+    """
+
+    def __init__(self, transform, details=None,
+                 rawsource='', *children, **attributes):
+        Element.__init__(self, rawsource, *children, **attributes)
+
+        self.transform = transform
+        """The `docutils.transforms.Transform` class implementing the pending
+        operation."""
+
+        self.details = details or {}
+        """Detail data (dictionary) required by the pending operation."""
+
+    def pformat(self, indent='    ', level=0):
+        internals = [
+              '.. internal attributes:',
+              '     .transform: %s.%s' % (self.transform.__module__,
+                                          self.transform.__name__),
+              '     .details:']
+        details = self.details.items()
+        details.sort()
+        for key, value in details:
+            if isinstance(value, Node):
+                internals.append('%7s%s:' % ('', key))
+                internals.extend(['%9s%s' % ('', line)
+                                  for line in value.pformat().splitlines()])
+            elif value and isinstance(value, ListType) \
+                  and isinstance(value[0], Node):
+                internals.append('%7s%s:' % ('', key))
+                for v in value:
+                    internals.extend(['%9s%s' % ('', line)
+                                      for line in v.pformat().splitlines()])
+            else:
+                internals.append('%7s%s: %r' % ('', key, value))
+        return (Element.pformat(self, indent, level)
+                + ''.join([('    %s%s\n' % (indent * level, line))
+                           for line in internals]))
+
+    def copy(self):
+        return self.__class__(self.transform, self.details, self.rawsource,
+                              **self.attributes)
+
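# Illustrative sketch -- not part of the committed file.  Directive code
# that needs a second processing pass typically builds a ``pending`` node,
# registers it, and attaches it where the result should appear.  Here
# ``Contents`` stands for the transform class that will finish the job,
# the details dict is made up for the example, and ``document``/``parent``
# are the doctree objects in scope:
#
#     placeholder = pending(Contents, details={'title': None},
#                           rawsource='.. contents::')
#     document.note_pending(placeholder)
#     parent += placeholder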
+
+class raw(Special, Inline, PreBibliographic, FixedTextElement):
+
+    """
+    Raw data that is to be passed untouched to the Writer.
+    """
+
+    pass
+
+
+# =================
+#  Inline Elements
+# =================
+
+class emphasis(Inline, TextElement): pass
+class strong(Inline, TextElement): pass
+class literal(Inline, TextElement): pass
+class reference(Inline, Referential, TextElement): pass
+class footnote_reference(Inline, Referential, TextElement): pass
+class citation_reference(Inline, Referential, TextElement): pass
+class substitution_reference(Inline, TextElement): pass
+class title_reference(Inline, TextElement): pass
+class abbreviation(Inline, TextElement): pass
+class acronym(Inline, TextElement): pass
+class superscript(Inline, TextElement): pass
+class subscript(Inline, TextElement): pass
+
+
+class image(General, Inline, TextElement):
+
+    def astext(self):
+        return self.get('alt', '')
+
+
+class inline(Inline, TextElement): pass
+class problematic(Inline, TextElement): pass
+class generated(Inline, TextElement): pass
+
+
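# Illustrative sketch -- not part of the committed file.  The concrete
# element classes above compose directly into a doctree, and pformat()
# renders the indented pseudo-XML used for testing:
sect = section()
sect += title('', 'Example')                    # (rawsource, text)
sect += paragraph('', 'Body text goes here.')
print sect.pformat()
# <section>
#     <title>
#         Example
#     <paragraph>
#         Body text goes here.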
+# ========================================
+#  Auxiliary Classes, Functions, and Data
+# ========================================
+
+node_class_names = """
+    Text
+    abbreviation acronym address admonition attention attribution author
+        authors
+    block_quote bullet_list
+    caption caution citation citation_reference classifier colspec comment
+        contact copyright
+    danger date decoration definition definition_list definition_list_item
+        description docinfo doctest_block document
+    emphasis entry enumerated_list error
+    field field_body field_list field_name figure footer
+        footnote footnote_reference
+    generated
+    header hint
+    image important inline
+    label legend line_block list_item literal literal_block
+    note
+    option option_argument option_group option_list option_list_item
+        option_string organization
+    paragraph pending problematic
+    raw reference revision row rubric
+    section sidebar status strong subscript substitution_definition
+        substitution_reference subtitle superscript system_message
+    table target tbody term tgroup thead tip title title_reference topic
+        transition
+    version
+    warning""".split()
+"""A list of names of all concrete Node subclasses."""
+
+
+class NodeVisitor:
+
+    """
+    "Visitor" pattern [GoF95]_ abstract superclass implementation for document
+    tree traversals.
+
+    Each node class has corresponding methods, doing nothing by default;
+    override individual methods for specific and useful behaviour.  The
+    "``visit_`` + node class name" method is called by `Node.walk()` upon
+    entering a node.  `Node.walkabout()` also calls the "``depart_`` + node
+    class name" method before exiting a node.
+
+    This is a base class for visitors whose ``visit_...`` & ``depart_...``
+    methods should be implemented for *all* node types encountered (such as
+    for `docutils.writers.Writer` subclasses).  Unimplemented methods will
+    raise exceptions.
+
+    For sparse traversals, where only certain node types are of interest,
+    subclass `SparseNodeVisitor` instead.  When (mostly or entirely) uniform
+    processing is desired, subclass `GenericNodeVisitor`.
+
+    .. [GoF95] Gamma, Helm, Johnson, Vlissides. *Design Patterns: Elements of
+       Reusable Object-Oriented Software*. Addison-Wesley, Reading, MA, USA,
+       1995.
+    """
+
+    def __init__(self, document):
+        self.document = document
+
+    def unknown_visit(self, node):
+        """
+        Called when entering unknown `Node` types.
+
+        Raise an exception unless overridden.
+        """
+        raise NotImplementedError('visiting unknown node type: %s'
+                                  % node.__class__.__name__)
+
+    def unknown_departure(self, node):
+        """
+        Called before exiting unknown `Node` types.
+
+        Raise exception unless overridden.
+        """
+        raise NotImplementedError('departing unknown node type: %s'
+                                  % node.__class__.__name__)
+
+
+class SparseNodeVisitor(NodeVisitor):
+
+    """
+    Base class for sparse traversals, where only certain node types are of
+    interest.  When ``visit_...`` & ``depart_...`` methods should be
+    implemented for *all* node types (such as for `docutils.writers.Writer`
+    subclasses), subclass `NodeVisitor` instead.
+    """
+
+    # Save typing with dynamic definitions.
+    for name in node_class_names:
+        exec """def visit_%s(self, node): pass\n""" % name
+        exec """def depart_%s(self, node): pass\n""" % name
+    del name
+
+
+class GenericNodeVisitor(NodeVisitor):
+
+    """
+    Generic "Visitor" abstract superclass, for simple traversals.
+
+    Unless overridden, each ``visit_...`` method calls `default_visit()`, and
+    each ``depart_...`` method (when using `Node.walkabout()`) calls
+    `default_departure()`. `default_visit()` (and `default_departure()`) must
+    be overridden in subclasses.
+
+    Define fully generic visitors by overriding `default_visit()` (and
+    `default_departure()`) only. Define semi-generic visitors by overriding
+    individual ``visit_...()`` (and ``depart_...()``) methods also.
+
+    `NodeVisitor.unknown_visit()` (`NodeVisitor.unknown_departure()`) should
+    be overridden for default behavior.
+    """
+
+    def default_visit(self, node):
+        """Override for generic, uniform traversals."""
+        raise NotImplementedError
+
+    def default_departure(self, node):
+        """Override for generic, uniform traversals."""
+        raise NotImplementedError
+
+    # Save typing with dynamic definitions.
+    for name in node_class_names:
+        exec """def visit_%s(self, node):
+                    self.default_visit(node)\n""" % name
+        exec """def depart_%s(self, node):
+                    self.default_departure(node)\n""" % name
+    del name
+
+
+class TreeCopyVisitor(GenericNodeVisitor):
+
+    """
+    Make a complete copy of a tree or branch, including element attributes.
+    """
+
+    def __init__(self, document):
+        GenericNodeVisitor.__init__(self, document)
+        self.parent_stack = []
+        self.parent = []
+
+    def get_tree_copy(self):
+        return self.parent[0]
+
+    def default_visit(self, node):
+        """Copy the current node, and make it the new acting parent."""
+        newnode = node.copy()
+        self.parent.append(newnode)
+        self.parent_stack.append(self.parent)
+        self.parent = newnode
+
+    def default_departure(self, node):
+        """Restore the previous acting parent."""
+        self.parent = self.parent_stack.pop()
+
+
+class TreePruningException(Exception):
+
+    """
+    Base class for `NodeVisitor`-related tree pruning exceptions.
+
+    Raise subclasses from within ``visit_...`` or ``depart_...`` methods
+    called from `Node.walk()` and `Node.walkabout()` tree traversals to prune
+    the tree traversed.
+    """
+
+    pass
+
+
+class SkipChildren(TreePruningException):
+
+    """
+    Do not visit any children of the current node.  The current node's
+    siblings and ``depart_...`` method are not affected.
+    """
+
+    pass
+
+
+class SkipSiblings(TreePruningException):
+
+    """
+    Do not visit any more siblings (to the right) of the current node.  The
+    current node's children and its ``depart_...`` method are not affected.
+    """
+
+    pass
+
+
+class SkipNode(TreePruningException):
+
+    """
+    Do not visit the current node's children, and do not call the current
+    node's ``depart_...`` method.
+    """
+
+    pass
+
+
+class SkipDeparture(TreePruningException):
+
+    """
+    Do not call the current node's ``depart_...`` method.  The current node's
+    children and siblings are not affected.
+    """
+
+    pass
+
+
+class NodeFound(TreePruningException):
+
+    """
+    Raise to indicate that the target of a search has been found.  This
+    exception must be caught by the client; it is not caught by the traversal
+    code.
+    """
+
+    pass
+
+
+def make_id(string):
+    """
+    Convert `string` into an identifier and return it.
+
+    Docutils identifiers will conform to the regular expression
+    ``[a-z](-?[a-z0-9]+)*``.  For CSS compatibility, identifiers (the "class"
+    and "id" attributes) should have no underscores, colons, or periods.
+    Hyphens may be used.
+
+    - The `HTML 4.01 spec`_ defines identifiers based on SGML tokens:
+
+          ID and NAME tokens must begin with a letter ([A-Za-z]) and may be
+          followed by any number of letters, digits ([0-9]), hyphens ("-"),
+          underscores ("_"), colons (":"), and periods (".").
+
+    - However the `CSS1 spec`_ defines identifiers based on the "name" token,
+      a tighter interpretation ("flex" tokenizer notation; "latin1" and
+      "escape" 8-bit characters have been replaced with entities)::
+
+          unicode     \\[0-9a-f]{1,4}
+          latin1      [&iexcl;-&yuml;]
+          escape      {unicode}|\\[ -~&iexcl;-&yuml;]
+          nmchar      [-a-z0-9]|{latin1}|{escape}
+          name        {nmchar}+
+
+    The CSS1 "nmchar" rule does not include underscores ("_"), colons (":"),
+    or periods ("."), therefore "class" and "id" attributes should not contain
+    these characters. They should be replaced with hyphens ("-"). Combined
+    with HTML's requirements (the first character must be a letter; no
+    "unicode", "latin1", or "escape" characters), this results in the
+    ``[a-z](-?[a-z0-9]+)*`` pattern.
+
+    .. _HTML 4.01 spec: http://www.w3.org/TR/html401
+    .. _CSS1 spec: http://www.w3.org/TR/REC-CSS1
+    """
+    id = _non_id_chars.sub('-', ' '.join(string.lower().split()))
+    id = _non_id_at_ends.sub('', id)
+    return str(id)
+
+_non_id_chars = re.compile('[^a-z0-9]+')
+_non_id_at_ends = re.compile('^[-0-9]+|-+$')
+
+def dupname(node):
+    node['dupname'] = node['name']
+    del node['name']
+
+def fully_normalize_name(name):
+    """Return a case- and whitespace-normalized name."""
+    return ' '.join(name.lower().split())
+
+def whitespace_normalize_name(name):
+    """Return a whitespace-normalized name."""
+    return ' '.join(name.split())
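
As a quick illustration of the visitor machinery and `make_id()` above, the
following sketch counts ``reference`` nodes in an already-built document tree
(``document`` is assumed to exist, e.g. as produced by the parser added further
below; the class and variable names are purely illustrative)::

    from docutils import nodes

    class ReferenceCounter(nodes.SparseNodeVisitor):

        """Count ``reference`` nodes, pruning the subtree below each one."""

        def __init__(self, document):
            nodes.SparseNodeVisitor.__init__(self, document)
            self.count = 0

        def visit_reference(self, node):
            self.count += 1
            raise nodes.SkipChildren    # siblings & departure are unaffected

    counter = ReferenceCounter(document)    # `document`: an existing tree
    document.walkabout(counter)
    print counter.count
    print nodes.make_id('A  Section Title!')    # -> 'a-section-title'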

Added: trunk/www/utils/helpers/docutils/docutils/parsers/__init__.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/__init__.py       
2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/__init__.py       
2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,48 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.4 $
+# Date: $Date: 2002/10/24 00:40:03 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains Docutils parser modules.
+"""
+
+__docformat__ = 'reStructuredText'
+
+from docutils import Component
+
+
+class Parser(Component):
+
+    component_type = 'parser'
+
+    def parse(self, inputstring, document):
+        """Override to parse `inputstring` into document tree `document`."""
+        raise NotImplementedError('subclass must override this method')
+
+    def setup_parse(self, inputstring, document):
+        """Initial parse setup.  Call at start of `self.parse()`."""
+        self.inputstring = inputstring
+        self.document = document
+        document.reporter.attach_observer(document.note_parse_message)
+
+    def finish_parse(self):
+        """Finalize parse details.  Call at end of `self.parse()`."""
+        self.document.reporter.detach_observer(
+            self.document.note_parse_message)
+
+
+_parser_aliases = {
+      'restructuredtext': 'rst',
+      'rest': 'rst',
+      'restx': 'rst',
+      'rtxt': 'rst',}
+
+def get_parser_class(parser_name):
+    """Return the Parser class from the `parser_name` module."""
+    parser_name = parser_name.lower()
+    if _parser_aliases.has_key(parser_name):
+        parser_name = _parser_aliases[parser_name]
+    module = __import__(parser_name, globals(), locals())
+    return module.Parser
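
For example, the alias table and `get_parser_class()` above resolve a
user-supplied parser name to the `Parser` class of the matching module (a
sketch; it assumes this ``docutils`` tree is importable)::

    from docutils.parsers import get_parser_class

    parser_class = get_parser_class('reStructuredText')   # alias for 'rst'
    parser = parser_class()
    # parser.parse(inputstring, document) then populates `document`;
    # setup_parse()/finish_parse() handle the reporter bookkeeping.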

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/__init__.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/__init__.py   
2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/__init__.py   
2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,123 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.12 $
+# Date: $Date: 2003/06/09 15:06:07 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This is the ``docutils.parsers.rst`` package. It exports a single class,
+`Parser`, the reStructuredText parser.
+
+
+Usage
+=====
+
+1. Create a parser::
+
+       parser = docutils.parsers.rst.Parser()
+
+   Several optional arguments may be passed to modify the parser's behavior.
+   Please see `Customizing the Parser`_ below for details.
+
+2. Gather input (a multi-line string), by reading a file or the standard
+   input::
+
+       input = sys.stdin.read()
+
+3. Create a new empty `docutils.nodes.document` tree::
+
+       document = docutils.utils.new_document(source, settings)
+
+   See `docutils.utils.new_document()` for parameter details.
+
+4. Run the parser, populating the document tree::
+
+       parser.parse(input, document)
+
+
+Parser Overview
+===============
+
+The reStructuredText parser is implemented as a state machine, examining its
+input one line at a time. To understand how the parser works, please first
+become familiar with the `docutils.statemachine` module, then see the
+`states` module.
+
+
+Customizing the Parser
+----------------------
+
+Anything that isn't already customizable is that way simply because that type
+of customizability hasn't been implemented yet.  Patches welcome!
+
+When instantiating an object of the `Parser` class, two parameters may be
+passed: ``rfc2822`` and ``inliner``.  Pass ``rfc2822=1`` to enable an initial
+RFC-2822 style header block, parsed as a "field_list" element (with "class"
+attribute set to "rfc2822").  Currently this is the only body-level element
+which is customizable without subclassing.  (Tip: subclass `Parser` and change
+its "state_classes" and "initial_state" attributes to refer to new classes.
+Contact the author if you need more details.)
+
+The ``inliner`` parameter takes an instance of `states.Inliner` or a subclass.
+It handles inline markup recognition.  A common extension is the addition of
+further implicit hyperlinks, like "RFC 2822".  This can be done by subclassing
+`states.Inliner`, adding a new method for the implicit markup, and adding a
+``(pattern, method)`` pair to the "implicit_dispatch" attribute of the
+subclass.  See `states.Inliner.implicit_inline()` for details.  Explicit
+inline markup can be customized in a `states.Inliner` subclass via the
+``patterns.initial`` and ``dispatch`` attributes (and new methods as
+appropriate).
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import docutils.parsers
+import docutils.statemachine
+from docutils.parsers.rst import states
+
+
+class Parser(docutils.parsers.Parser):
+
+    """The reStructuredText parser."""
+
+    supported = ('restructuredtext', 'rst', 'rest', 'restx', 'rtxt', 'rstx')
+    """Aliases this parser supports."""
+
+    settings_spec = (
+        'reStructuredText Parser Options',
+        None,
+        (('Recognize and link to PEP references (like "PEP 258").',
+          ['--pep-references'],
+          {'action': 'store_true'}),
+         ('Recognize and link to RFC references (like "RFC 822").',
+          ['--rfc-references'],
+          {'action': 'store_true'}),
+         ('Set number of spaces for tab expansion (default 8).',
+          ['--tab-width'],
+          {'metavar': '<width>', 'type': 'int', 'default': 8}),
+         ('Remove spaces before footnote references.',
+          ['--trim-footnote-reference-space'],
+          {'action': 'store_true'}),))
+
+    def __init__(self, rfc2822=None, inliner=None):
+        if rfc2822:
+            self.initial_state = 'RFC2822Body'
+        else:
+            self.initial_state = 'Body'
+        self.state_classes = states.state_classes
+        self.inliner = inliner
+
+    def parse(self, inputstring, document):
+        """Parse `inputstring` and populate `document`, a document tree."""
+        self.setup_parse(inputstring, document)
+        debug = document.reporter[''].debug
+        self.statemachine = states.RSTStateMachine(
+              state_classes=self.state_classes,
+              initial_state=self.initial_state,
+              debug=debug)
+        inputlines = docutils.statemachine.string2lines(
+              inputstring, tab_width=document.settings.tab_width,
+              convert_whitespace=1)
+        self.statemachine.run(inputlines, document, inliner=self.inliner)
+        self.finish_parse()
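
Putting the docstring's four usage steps together, a complete run looks roughly
like this (``docutils.frontend.OptionParser`` and ``docutils.utils.new_document``
live in other files added by this commit; the settings recipe below is the usual
docutils idiom and is shown here as an assumption, not as code from this file)::

    import sys

    import docutils.frontend
    import docutils.parsers.rst
    import docutils.utils

    parser = docutils.parsers.rst.Parser()
    settings = docutils.frontend.OptionParser(
        components=(docutils.parsers.rst.Parser,)).get_default_values()
    document = docutils.utils.new_document('<stdin>', settings)
    parser.parse(sys.stdin.read(), document)
    print document.pformat()    # dump the resulting node tree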

Added: 
trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/__init__.py
===================================================================
--- 
trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/__init__.py    
    2004-03-07 00:58:55 UTC (rev 5248)
+++ 
trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/__init__.py    
    2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,252 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.21 $
+# Date: $Date: 2003/06/03 02:16:59 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains directive implementation modules.
+
+The interface for directive functions is as follows::
+
+    def directive_fn(name, arguments, options, content, lineno,
+                     content_offset, block_text, state, state_machine):
+        code...
+
+    # Set function attributes:
+    directive_fn.arguments = ...
+    directive_fn.options = ...
+    directive_fn.content = ...
+
+Parameters:
+
+- ``name`` is the directive type or name.
+
+- ``arguments`` is a list of positional arguments.
+
+- ``options`` is a dictionary mapping option names to values.
+
+- ``content`` is a list of strings, the directive content.
+
+- ``lineno`` is the line number of the first line of the directive.
+
+- ``content_offset`` is the line offset of the first line of the content from
+  the beginning of the current input.  Used when initiating a nested parse.
+
+- ``block_text`` is a string containing the entire directive.  Include it as
+  the content of a literal block in a system message if there is a problem.
+
+- ``state`` is the state which called the directive function.
+
+- ``state_machine`` is the state machine which controls the state which called
+  the directive function.
+
+Function attributes, interpreted by the directive parser (which calls the
+directive function):
+
+- ``arguments``: A 3-tuple specifying the expected positional arguments, or
+  ``None`` if the directive has no arguments.  The 3 items in the tuple are
+  ``(required, optional, whitespace OK in last argument)``:
+
+  1. The number of required arguments.
+  2. The number of optional arguments.
+  3. A boolean, indicating if the final argument may contain whitespace.
+
+  Arguments are normally single whitespace-separated words.  The final
+  argument may contain whitespace if the third item in the argument spec tuple
+  is 1/True.  If the form of the arguments is more complex, specify only one
+  argument (either required or optional) and indicate that final whitespace is
+  OK; the client code must do any context-sensitive parsing.
+
+- ``options``: A dictionary, mapping known option names to conversion
+  functions such as `int` or `float`.  ``None`` or an empty dict implies no
+  options to parse.
+
+- ``content``: A boolean; true if content is allowed.  Client code must handle
+  the case where content is required but not supplied (an empty content list
+  will be supplied).
+
+Directive functions return a list of nodes which will be inserted into the
+document tree at the point where the directive was encountered (can be an
+empty list).
+
+See `Creating reStructuredText Directives`_ for more information.
+
+.. _Creating reStructuredText Directives:
+   http://docutils.sourceforge.net/spec/howto/rst-directives.html
+"""
+
+__docformat__ = 'reStructuredText'
+
+from docutils import nodes
+from docutils.parsers.rst.languages import en as _fallback_language_module
+
+
+_directive_registry = {
+      'attention': ('admonitions', 'attention'),
+      'caution': ('admonitions', 'caution'),
+      'danger': ('admonitions', 'danger'),
+      'error': ('admonitions', 'error'),
+      'important': ('admonitions', 'important'),
+      'note': ('admonitions', 'note'),
+      'tip': ('admonitions', 'tip'),
+      'hint': ('admonitions', 'hint'),
+      'warning': ('admonitions', 'warning'),
+      'admonition': ('admonitions', 'admonition'),
+      'sidebar': ('body', 'sidebar'),
+      'topic': ('body', 'topic'),
+      'line-block': ('body', 'line_block'),
+      'parsed-literal': ('body', 'parsed_literal'),
+      'rubric': ('body', 'rubric'),
+      'epigraph': ('body', 'epigraph'),
+      'highlights': ('body', 'highlights'),
+      'pull-quote': ('body', 'pull_quote'),
+      #'questions': ('body', 'question_list'),
+      'image': ('images', 'image'),
+      'figure': ('images', 'figure'),
+      'contents': ('parts', 'contents'),
+      'sectnum': ('parts', 'sectnum'),
+      #'footnotes': ('parts', 'footnotes'),
+      #'citations': ('parts', 'citations'),
+      'target-notes': ('references', 'target_notes'),
+      'meta': ('html', 'meta'),
+      #'imagemap': ('html', 'imagemap'),
+      'raw': ('misc', 'raw'),
+      'include': ('misc', 'include'),
+      'replace': ('misc', 'replace'),
+      'unicode': ('misc', 'unicode_directive'),
+      'class': ('misc', 'class_directive'),
+      'restructuredtext-test-directive': ('misc', 'directive_test_function'),}
+"""Mapping of directive name to (module name, function name).  The directive
+name is canonical & must be lowercase.  Language-dependent names are defined
+in the ``languages`` subpackage."""
+
+_modules = {}
+"""Cache of imported directive modules."""
+
+_directives = {}
+"""Cache of imported directive functions."""
+
+def directive(directive_name, language_module, document):
+    """
+    Locate and return a directive function from its language-dependent name.
+    If not found in the current language, check English.  Return None if the
+    named directive cannot be found.
+    """
+    normname = directive_name.lower()
+    messages = []
+    msg_text = []
+    if _directives.has_key(normname):
+        return _directives[normname], messages
+    canonicalname = None
+    try:
+        canonicalname = language_module.directives[normname]
+    except AttributeError, error:
+        msg_text.append('Problem retrieving directive entry from language '
+                        'module %r: %s.' % (language_module, error))
+    except KeyError:
+        msg_text.append('No directive entry for "%s" in module "%s".'
+                        % (directive_name, language_module.__name__))
+    if not canonicalname:
+        try:
+            canonicalname = _fallback_language_module.directives[normname]
+            msg_text.append('Using English fallback for directive "%s".'
+                            % directive_name)
+        except KeyError:
+            msg_text.append('Trying "%s" as canonical directive name.'
+                            % directive_name)
+            # The canonical name should be an English name, but just in case:
+            canonicalname = normname
+    if msg_text:
+        message = document.reporter.info(
+            '\n'.join(msg_text), line=document.current_line)
+        messages.append(message)
+    try:
+        modulename, functionname = _directive_registry[canonicalname]
+    except KeyError:
+        return None, messages
+    if _modules.has_key(modulename):
+        module = _modules[modulename]
+    else:
+        try:
+            module = __import__(modulename, globals(), locals())
+        except ImportError:
+            return None, messages
+    try:
+        function = getattr(module, functionname)
+        _directives[normname] = function
+    except AttributeError:
+        return None, messages
+    return function, messages
+
+def register_directive(name, directive):
+    """Register a nonstandard application-defined directive function."""
+    _directives[name] = directive
+
+def flag(argument):
+    """
+    Check for a valid flag option (no argument) and return ``None``.
+
+    Raise ``ValueError`` if an argument is found.
+    """
+    if argument and argument.strip():
+        raise ValueError('no argument is allowed; "%s" supplied' % argument)
+    else:
+        return None
+
+def unchanged(argument):
+    """
+    Return the argument, unchanged.
+
+    Raise ``ValueError`` if no argument is found.
+    """
+    if argument is None:
+        raise ValueError('argument required but none supplied')
+    else:
+        return argument  # unchanged!
+
+def path(argument):
+    """
+    Return the path argument unwrapped (with newlines removed).
+
+    Raise ``ValueError`` if no argument is found or if the path contains
+    internal whitespace.
+    """
+    if argument is None:
+        raise ValueError('argument required but none supplied')
+    else:
+        path = ''.join([s.strip() for s in argument.splitlines()])
+        if path.find(' ') == -1:
+            return path
+        else:
+            raise ValueError('path contains whitespace')
+
+def nonnegative_int(argument):
+    """
+    Check for a nonnegative integer argument; raise ``ValueError`` if not.
+    """
+    value = int(argument)
+    if value < 0:
+        raise ValueError('negative value; must be positive or zero')
+    return value
+
+def format_values(values):
+    return '%s, or "%s"' % (', '.join(['"%s"' % s for s in values[:-1]]),
+                            values[-1])
+
+def choice(argument, values):
+    try:
+        value = argument.lower().strip()
+    except AttributeError:
+        raise ValueError('must supply an argument; choose from %s'
+                         % format_values(values))
+    if value in values:
+        return value
+    else:
+        raise ValueError('"%s" unknown; choose from %s'
+                         % (argument, format_values(values)))
+
+def class_option(argument):
+    if argument is None:
+        raise ValueError('argument required but none supplied')
+    return nodes.make_id(argument)
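
As a concrete example of the directive interface documented above, here is a
hypothetical ``note-to-self`` directive (the name and behaviour are invented for
illustration; only the signature, the function attributes, and
`register_directive()` come from this module)::

    from docutils import nodes
    from docutils.parsers.rst import directives

    def note_to_self(name, arguments, options, content, lineno,
                     content_offset, block_text, state, state_machine):
        if not content:
            error = state_machine.reporter.error(
                'The "%s" directive requires content.' % name,
                nodes.literal_block(block_text, block_text), line=lineno)
            return [error]
        node = nodes.block_quote('\n'.join(content))
        state.nested_parse(content, content_offset, node)
        return [node]

    note_to_self.arguments = None    # no positional arguments
    note_to_self.options = None      # no options
    note_to_self.content = 1         # a content block is expected

    directives.register_directive('note-to-self', note_to_self)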

Added: 
trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/admonitions.py
===================================================================
--- 
trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/admonitions.py 
    2004-03-07 00:58:55 UTC (rev 5248)
+++ 
trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/admonitions.py 
    2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,90 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.6 $
+# Date: $Date: 2003/05/24 20:47:13 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Admonition directives.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+from docutils.parsers.rst import states, directives
+from docutils import nodes
+
+
+def make_admonition(node_class, name, arguments, options, content, lineno,
+                       content_offset, block_text, state, state_machine):
+    if not content:
+        error = state_machine.reporter.error(
+            'The "%s" admonition is empty; content required.' % (name),
+            nodes.literal_block(block_text, block_text), line=lineno)
+        return [error]
+    text = '\n'.join(content)
+    admonition_node = node_class(text)
+    if arguments:
+        title_text = arguments[0]
+        textnodes, messages = state.inline_text(title_text, lineno)
+        admonition_node += nodes.title(title_text, '', *textnodes)
+        admonition_node += messages
+        if options.has_key('class'):
+            class_value = options['class']
+        else:
+            class_value = 'admonition-' + nodes.make_id(title_text)
+        admonition_node.set_class(class_value)
+    state.nested_parse(content, content_offset, admonition_node)
+    return [admonition_node]
+
+def admonition(*args):
+    return make_admonition(nodes.admonition, *args)
+
+admonition.arguments = (1, 0, 1)
+admonition.options = {'class': directives.class_option}
+admonition.content = 1
+
+def attention(*args):
+    return make_admonition(nodes.attention, *args)
+
+attention.content = 1
+
+def caution(*args):
+    return make_admonition(nodes.caution, *args)
+
+caution.content = 1
+
+def danger(*args):
+    return make_admonition(nodes.danger, *args)
+
+danger.content = 1
+
+def error(*args):
+    return make_admonition(nodes.error, *args)
+
+error.content = 1
+
+def hint(*args):
+    return make_admonition(nodes.hint, *args)
+
+hint.content = 1
+
+def important(*args):
+    return make_admonition(nodes.important, *args)
+
+important.content = 1
+
+def note(*args):
+    return make_admonition(nodes.note, *args)
+
+note.content = 1
+
+def tip(*args):
+    return make_admonition(nodes.tip, *args)
+
+tip.content = 1
+
+def warning(*args):
+    return make_admonition(nodes.warning, *args)
+
+warning.content = 1
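
The `make_admonition()` helper also makes additional admonition-style directives
cheap to define; for instance, a hypothetical ``caveat`` directive reusing the
generic `nodes.admonition` element (the directive name is invented for
illustration)::

    from docutils import nodes
    from docutils.parsers.rst import directives
    from docutils.parsers.rst.directives.admonitions import make_admonition

    def caveat(*args):
        # Same calling convention as the directives above; the required
        # title argument becomes the admonition's title.
        return make_admonition(nodes.admonition, *args)

    caveat.arguments = (1, 0, 1)
    caveat.options = {'class': directives.class_option}
    caveat.content = 1

    directives.register_directive('caveat', caveat)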

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/body.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/body.py    
2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/body.py    
2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,122 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.12 $
+# Date: $Date: 2003/06/03 02:15:44 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Directives for additional body elements.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+from docutils import nodes
+from docutils.parsers.rst import directives
+
+
+def topic(name, arguments, options, content, lineno,
+          content_offset, block_text, state, state_machine,
+          node_class=nodes.topic):
+    if not state_machine.match_titles:
+        error = state_machine.reporter.error(
+              'The "%s" directive may not be used within topics, sidebars, '
+              'or body elements.' % name,
+              nodes.literal_block(block_text, block_text), line=lineno)
+        return [error]
+    if not content:
+        warning = state_machine.reporter.warning(
+            'Content block expected for the "%s" directive; none found.'
+            % name, nodes.literal_block(block_text, block_text),
+            line=lineno)
+        return [warning]
+    title_text = arguments[0]
+    textnodes, messages = state.inline_text(title_text, lineno)
+    titles = [nodes.title(title_text, '', *textnodes)]
+    if options.has_key('subtitle'):
+        textnodes, more_messages = state.inline_text(options['subtitle'],
+                                                     lineno)
+        titles.append(nodes.subtitle(options['subtitle'], '', *textnodes))
+        messages.extend(more_messages)
+    text = '\n'.join(content)
+    node = node_class(text, *(titles + messages))
+    if options.has_key('class'):
+        node.set_class(options['class'])
+    if text:
+        state.nested_parse(content, content_offset, node)
+    return [node]
+
+topic.arguments = (1, 0, 1)
+topic.options = {'class': directives.class_option}
+topic.content = 1
+
+def sidebar(name, arguments, options, content, lineno,
+            content_offset, block_text, state, state_machine):
+    return topic(name, arguments, options, content, lineno,
+                 content_offset, block_text, state, state_machine,
+                 node_class=nodes.sidebar)
+
+sidebar.arguments = (1, 0, 1)
+sidebar.options = {'subtitle': directives.unchanged,
+                   'class': directives.class_option}
+sidebar.content = 1
+
+def line_block(name, arguments, options, content, lineno,
+               content_offset, block_text, state, state_machine,
+               node_class=nodes.line_block):
+    if not content:
+        warning = state_machine.reporter.warning(
+            'Content block expected for the "%s" directive; none found.'
+            % name, nodes.literal_block(block_text, block_text), line=lineno)
+        return [warning]
+    text = '\n'.join(content)
+    text_nodes, messages = state.inline_text(text, lineno)
+    node = node_class(text, '', *text_nodes, **options)
+    return [node] + messages
+
+line_block.options = {'class': directives.class_option}
+line_block.content = 1
+
+def parsed_literal(name, arguments, options, content, lineno,
+                   content_offset, block_text, state, state_machine):
+    return line_block(name, arguments, options, content, lineno,
+                      content_offset, block_text, state, state_machine,
+                      node_class=nodes.literal_block)
+
+parsed_literal.options = {'class': directives.class_option}
+parsed_literal.content = 1
+
+def rubric(name, arguments, options, content, lineno,
+             content_offset, block_text, state, state_machine):
+    rubric_text = arguments[0]
+    textnodes, messages = state.inline_text(rubric_text, lineno)
+    rubric = nodes.rubric(rubric_text, '', *textnodes, **options)
+    return [rubric] + messages
+
+rubric.arguments = (1, 0, 1)
+rubric.options = {'class': directives.class_option}
+
+def epigraph(name, arguments, options, content, lineno,
+             content_offset, block_text, state, state_machine):
+    block_quote, messages = state.block_quote(content, content_offset)
+    block_quote.set_class('epigraph')
+    return [block_quote] + messages
+
+epigraph.content = 1
+
+def highlights(name, arguments, options, content, lineno,
+             content_offset, block_text, state, state_machine):
+    block_quote, messages = state.block_quote(content, content_offset)
+    block_quote.set_class('highlights')
+    return [block_quote] + messages
+
+highlights.content = 1
+
+def pull_quote(name, arguments, options, content, lineno,
+             content_offset, block_text, state, state_machine):
+    block_quote, messages = state.block_quote(content, content_offset)
+    block_quote.set_class('pull-quote')
+    return [block_quote] + messages
+
+pull_quote.content = 1

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/html.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/html.py    
2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/html.py    
2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,96 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.9 $
+# Date: $Date: 2002/10/24 00:56:57 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Directives for typically HTML-specific constructs.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+from docutils import nodes, utils
+from docutils.parsers.rst import states
+from docutils.transforms import components
+
+
+def meta(name, arguments, options, content, lineno,
+         content_offset, block_text, state, state_machine):
+    node = nodes.Element()
+    if content:
+        new_line_offset, blank_finish = state.nested_list_parse(
+              content, content_offset, node, initial_state='MetaBody',
+              blank_finish=1, state_machine_kwargs=metaSMkwargs)
+        if (new_line_offset - content_offset) != len(content):
+            # incomplete parse of block?
+            error = state_machine.reporter.error(
+                'Invalid meta directive.',
+                nodes.literal_block(block_text, block_text), line=lineno)
+            node += error
+    else:
+        error = state_machine.reporter.error(
+            'Empty meta directive.',
+            nodes.literal_block(block_text, block_text), line=lineno)
+        node += error
+    return node.get_children()
+
+meta.content = 1
+
+def imagemap(name, arguments, options, content, lineno,
+             content_offset, block_text, state, state_machine):
+    return []
+
+
+class MetaBody(states.SpecializedBody):
+
+    class meta(nodes.Special, nodes.PreBibliographic, nodes.Element):
+        """HTML-specific "meta" element."""
+        pass
+
+    def field_marker(self, match, context, next_state):
+        """Meta element."""
+        node, blank_finish = self.parsemeta(match)
+        self.parent += node
+        return [], next_state, []
+
+    def parsemeta(self, match):
+        name = self.parse_field_marker(match)
+        indented, indent, line_offset, blank_finish = \
+              self.state_machine.get_first_known_indented(match.end())
+        node = self.meta()
+        pending = nodes.pending(components.Filter,
+                                {'component': 'writer',
+                                 'format': 'html',
+                                 'nodes': [node]})
+        node['content'] = ' '.join(indented)
+        if not indented:
+            line = self.state_machine.line
+            msg = self.reporter.info(
+                  'No content for meta tag "%s".' % name,
+                  nodes.literal_block(line, line),
+                  line=self.state_machine.abs_line_number())
+            return msg, blank_finish
+        tokens = name.split()
+        try:
+            attname, val = utils.extract_name_value(tokens[0])[0]
+            node[attname.lower()] = val
+        except utils.NameValueError:
+            node['name'] = tokens[0]
+        for token in tokens[1:]:
+            try:
+                attname, val = utils.extract_name_value(token)[0]
+                node[attname.lower()] = val
+            except utils.NameValueError, detail:
+                line = self.state_machine.line
+                msg = self.reporter.error(
+                      'Error parsing meta tag attribute "%s": %s.'
+                      % (token, detail), nodes.literal_block(line, line),
+                      line=self.state_machine.abs_line_number())
+                return msg, blank_finish
+        self.document.note_pending(pending)
+        return pending, blank_finish
+
+
+metaSMkwargs = {'state_classes': (MetaBody,)}

Added: 
trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/images.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/images.py  
2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/images.py  
2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,100 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.12 $
+# Date: $Date: 2003/05/24 20:47:43 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Directives for figures and simple images.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+from docutils import nodes, utils
+from docutils.parsers.rst import directives
+
+try:
+    import Image                        # PIL
+except ImportError:
+    Image = None
+
+align_values = ('top', 'middle', 'bottom', 'left', 'center', 'right')
+
+def align(argument):
+    return directives.choice(argument, align_values)
+
+def image(name, arguments, options, content, lineno,
+          content_offset, block_text, state, state_machine):
+    reference = ''.join(arguments[0].split('\n'))
+    if reference.find(' ') != -1:
+        error = state_machine.reporter.error(
+              'Image URI contains whitespace.',
+              nodes.literal_block(block_text, block_text), line=lineno)
+        return [error]
+    options['uri'] = reference
+    image_node = nodes.image(block_text, **options)
+    return [image_node]
+
+image.arguments = (1, 0, 1)
+image.options = {'alt': directives.unchanged,
+                 'height': directives.nonnegative_int,
+                 'width': directives.nonnegative_int,
+                 'scale': directives.nonnegative_int,
+                 'align': align,
+                 'class': directives.class_option}
+
+def figure(name, arguments, options, content, lineno,
+           content_offset, block_text, state, state_machine):
+    figwidth = options.setdefault('figwidth')
+    figclass = options.setdefault('figclass')
+    del options['figwidth']
+    del options['figclass']
+    (image_node,) = image(name, arguments, options, content, lineno,
+                         content_offset, block_text, state, state_machine)
+    if isinstance(image_node, nodes.system_message):
+        return [image_node]
+    figure_node = nodes.figure('', image_node)
+    if figwidth == 'image':
+        if Image:
+            # PIL doesn't like Unicode paths:
+            try:
+                i = Image.open(str(image_node['uri']))
+            except (IOError, UnicodeError):
+                pass
+            else:
+                figure_node['width'] = i.size[0]
+    elif figwidth is not None:
+        figure_node['width'] = figwidth
+    if figclass:
+        figure_node.set_class(figclass)
+    if content:
+        node = nodes.Element()          # anonymous container for parsing
+        state.nested_parse(content, content_offset, node)
+        first_node = node[0]
+        if isinstance(first_node, nodes.paragraph):
+            caption = nodes.caption(first_node.rawsource, '',
+                                    *first_node.children)
+            figure_node += caption
+        elif not (isinstance(first_node, nodes.comment)
+                  and len(first_node) == 0):
+            error = state_machine.reporter.error(
+                  'Figure caption must be a paragraph or empty comment.',
+                  nodes.literal_block(block_text, block_text), line=lineno)
+            return [figure_node, error]
+        if len(node) > 1:
+            figure_node += nodes.legend('', *node[1:])
+    return [figure_node]
+
+def figwidth_value(argument):
+    if argument.lower() == 'image':
+        return 'image'
+    else:
+        return directives.nonnegative_int(argument)
+
+figure.arguments = (1, 0, 1)
+figure.options = {'figwidth': figwidth_value,
+                  'figclass': directives.class_option}
+figure.options.update(image.options)
+figure.content = 1

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/misc.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/misc.py    
2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/misc.py    
2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,231 @@
+# Authors: David Goodger, Dethe Elza
+# Contact: address@hidden
+# Revision: $Revision: 1.14 $
+# Date: $Date: 2003/06/22 22:21:28 $
+# Copyright: This module has been placed in the public domain.
+
+"""Miscellaneous directives."""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import os.path
+import re
+from urllib2 import urlopen, URLError
+from docutils import io, nodes, statemachine, utils
+from docutils.parsers.rst import directives, states
+from docutils.transforms import misc
+
+
+def include(name, arguments, options, content, lineno,
+            content_offset, block_text, state, state_machine):
+    """Include a reST file as part of the content of this reST file."""
+    source = state_machine.input_lines.source(
+        lineno - state_machine.input_offset - 1)
+    source_dir = os.path.dirname(os.path.abspath(source))
+    path = ''.join(arguments[0].splitlines())
+    if path.find(' ') != -1:
+        error = state_machine.reporter.error(
+              '"%s" directive path contains whitespace.' % name,
+              nodes.literal_block(block_text, block_text), line=lineno)
+        return [error]
+    path = os.path.normpath(os.path.join(source_dir, path))
+    path = utils.relative_path(None, path)
+    try:
+        include_file = io.FileInput(
+            source_path=path, encoding=state.document.settings.input_encoding)
+    except IOError, error:
+        severe = state_machine.reporter.severe(
+              'Problems with "%s" directive path:\n%s.' % (name, error),
+              nodes.literal_block(block_text, block_text), line=lineno)
+        return [severe]
+    include_text = include_file.read()
+    if options.has_key('literal'):
+        literal_block = nodes.literal_block(include_text, include_text,
+                                            source=path)
+        literal_block.line = 1
+        return literal_block
+    else:
+        include_lines = statemachine.string2lines(include_text,
+                                                  convert_whitespace=1)
+        state_machine.insert_input(include_lines, path)
+        return []
+
+include.arguments = (1, 0, 1)
+include.options = {'literal': directives.flag}
+
+def raw(name, arguments, options, content, lineno,
+        content_offset, block_text, state, state_machine):
+    """
+    Pass the content through unchanged.
+
+    The content is included in the output according to the format (type)
+    argument.  It may be given inline (as the directive's content block) or
+    imported from a file or URL.
+    """
+    attributes = {'format': arguments[0]}
+    if content:
+        if options.has_key('file') or options.has_key('url'):
+            error = state_machine.reporter.error(
+                  '"%s" directive may not both specify an external file and '
+                  'have content.' % name,
+                  nodes.literal_block(block_text, block_text), line=lineno)
+            return [error]
+        text = '\n'.join(content)
+    elif options.has_key('file'):
+        if options.has_key('url'):
+            error = state_machine.reporter.error(
+                  'The "file" and "url" options may not be simultaneously '
+                  'specified for the "%s" directive.' % name,
+                  nodes.literal_block(block_text, block_text), line=lineno)
+            return [error]
+        source_dir = os.path.dirname(
+            os.path.abspath(state.document.current_source))
+        path = os.path.normpath(os.path.join(source_dir, options['file']))
+        path = utils.relative_path(None, path)
+        try:
+            raw_file = open(path)
+        except IOError, error:
+            severe = state_machine.reporter.severe(
+                  'Problems with "%s" directive path:\n%s.' % (name, error),
+                  nodes.literal_block(block_text, block_text), line=lineno)
+            return [severe]
+        text = raw_file.read()
+        raw_file.close()
+        attributes['source'] = path
+    elif options.has_key('url'):
+        try:
+            raw_file = urlopen(options['url'])
+        except (URLError, IOError, OSError), error:
+            severe = state_machine.reporter.severe(
+                  'Problems with "%s" directive URL "%s":\n%s.'
+                  % (name, options['url'], error),
+                  nodes.literal_block(block_text, block_text), line=lineno)
+            return [severe]
+        text = raw_file.read()
+        raw_file.close()
+        attributes['source'] = options['url']
+    else:
+        error = state_machine.reporter.warning(
+            'The "%s" directive requires content; none supplied.' % (name),
+            nodes.literal_block(block_text, block_text), line=lineno)
+        return [error]
+    raw_node = nodes.raw('', text, **attributes)
+    return [raw_node]
+
+raw.arguments = (1, 0, 1)
+raw.options = {'file': directives.path,
+               'url': directives.path}
+raw.content = 1
+
+def replace(name, arguments, options, content, lineno,
+            content_offset, block_text, state, state_machine):
+    if not isinstance(state, states.SubstitutionDef):
+        error = state_machine.reporter.error(
+            'Invalid context: the "%s" directive can only be used within a '
+            'substitution definition.' % (name),
+            nodes.literal_block(block_text, block_text), line=lineno)
+        return [error]
+    text = '\n'.join(content)
+    element = nodes.Element(text)
+    if text:
+        state.nested_parse(content, content_offset, element)
+        if len(element) != 1 or not isinstance(element[0], nodes.paragraph):
+            messages = []
+            for node in element:
+                if isinstance(node, nodes.system_message):
+                    if node.has_key('backrefs'):
+                        del node['backrefs']
+                    messages.append(node)
+            error = state_machine.reporter.error(
+                'Error in "%s" directive: may contain a single paragraph '
+                'only.' % (name), line=lineno)
+            messages.append(error)
+            return messages
+        else:
+            return element[0].children
+    else:
+        error = state_machine.reporter.error(
+            'The "%s" directive is empty; content required.' % (name),
+            line=lineno)
+        return [error]
+
+replace.content = 1
+
+def unicode_directive(name, arguments, options, content, lineno,
+                         content_offset, block_text, state, state_machine):
+    r"""
+    Convert Unicode character codes (numbers) to characters.  Codes may be
+    decimal numbers, hexadecimal numbers (prefixed by ``0x``, ``x``, ``\x``,
+    ``U+``, ``u``, or ``\u``; e.g. ``U+262E``), or XML-style numeric character
+    entities (e.g. ``&#x262E;``).  Text following ".." is a comment and is
+    ignored.  Spaces are ignored, and any other text remains as-is.
+    """
+    if not isinstance(state, states.SubstitutionDef):
+        error = state_machine.reporter.error(
+            'Invalid context: the "%s" directive can only be used within a '
+            'substitution definition.' % (name),
+            nodes.literal_block(block_text, block_text), line=lineno)
+        return [error]
+    codes = arguments[0].split('.. ')[0].split()
+    element = nodes.Element()
+    for code in codes:
+        try:
+            if code.isdigit():
+                element += nodes.Text(unichr(int(code)))
+            else:
+                match = unicode_pattern.match(code)
+                if match:
+                    value = match.group(1) or match.group(2)
+                    element += nodes.Text(unichr(int(value, 16)))
+                else:
+                    element += nodes.Text(code)
+        except ValueError, err:
+            error = state_machine.reporter.error(
+                'Invalid character code: %s\n%s' % (code, err),
+                nodes.literal_block(block_text, block_text), line=lineno)
+            return [error]
+    return element.children
+
+unicode_directive.arguments = (1, 0, 1)
+unicode_pattern = re.compile(
+    r'(?:0x|x|\\x|U\+?|\\u)([0-9a-f]+)$|&#x([0-9a-f]+);$', re.IGNORECASE)
+
+def class_directive(name, arguments, options, content, lineno,
+                       content_offset, block_text, state, state_machine):
+    """"""
+    class_value = nodes.make_id(arguments[0])
+    if class_value:
+        pending = nodes.pending(misc.ClassAttribute,
+                                {'class': class_value, 'directive': name},
+                                block_text)
+        state_machine.document.note_pending(pending)
+        return [pending]
+    else:
+        error = state_machine.reporter.error(
+            'Invalid class attribute value for "%s" directive: %s'
+            % (name, arguments[0]),
+            nodes.literal_block(block_text, block_text), line=lineno)
+        return [error]
+
+class_directive.arguments = (1, 0, 0)
+class_directive.content = 1
+
+def directive_test_function(name, arguments, options, content, lineno,
+                            content_offset, block_text, state, state_machine):
+    if content:
+        text = '\n'.join(content)
+        info = state_machine.reporter.info(
+            'Directive processed. Type="%s", arguments=%r, options=%r, '
+            'content:' % (name, arguments, options),
+            nodes.literal_block(text, text), line=lineno)
+    else:
+        info = state_machine.reporter.info(
+            'Directive processed. Type="%s", arguments=%r, options=%r, '
+            'content: None' % (name, arguments, options), line=lineno)
+    return [info]
+
+directive_test_function.arguments = (0, 1, 1)
+directive_test_function.options = {'option': directives.unchanged}
+directive_test_function.content = 1
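
As a small sanity check of the character-code handling described in
`unicode_directive()` above, the conversion logic can be exercised on its own
(a sketch that re-implements the loop inline rather than going through the
directive machinery; ``repr()`` is used only to sidestep terminal encoding
issues)::

    import re

    unicode_pattern = re.compile(
        r'(?:0x|x|\\x|U\+?|\\u)([0-9a-f]+)$|&#x([0-9a-f]+);$', re.IGNORECASE)

    for code in ['9731', 'U+262E', '&#x262E;']:
        if code.isdigit():
            print repr(unichr(int(code)))           # decimal form
        else:
            match = unicode_pattern.match(code)
            value = match.group(1) or match.group(2)
            print repr(unichr(int(value, 16)))      # hexadecimal forms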

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/parts.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/parts.py   
2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/parts.py   
2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,56 @@
+# Author: David Goodger, Dmitry Jemerov
+# Contact: address@hidden
+# Revision: $Revision: 1.11 $
+# Date: $Date: 2003/05/24 20:48:20 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Directives for document parts.
+"""
+
+__docformat__ = 'reStructuredText'
+
+from docutils import nodes
+from docutils.transforms import parts
+from docutils.parsers.rst import directives
+
+
+backlinks_values = ('top', 'entry', 'none')
+
+def backlinks(arg):
+    value = directives.choice(arg, backlinks_values)
+    if value == 'none':
+        return None
+    else:
+        return value
+
+def contents(name, arguments, options, content, lineno,
+             content_offset, block_text, state, state_machine):
+    """Table of contents."""
+    if arguments:
+        title_text = arguments[0]
+        text_nodes, messages = state.inline_text(title_text, lineno)
+        title = nodes.title(title_text, '', *text_nodes)
+    else:
+        messages = []
+        title = None
+    pending = nodes.pending(parts.Contents, {'title': title}, block_text)
+    pending.details.update(options)
+    state_machine.document.note_pending(pending)
+    return [pending] + messages
+
+contents.arguments = (0, 1, 1)
+contents.options = {'depth': directives.nonnegative_int,
+                    'local': directives.flag,
+                    'backlinks': backlinks,
+                    'class': directives.class_option}
+
+def sectnum(name, arguments, options, content, lineno,
+            content_offset, block_text, state, state_machine):
+    """Automatic section numbering."""
+    pending = nodes.pending(parts.SectNum)
+    pending.details.update(options)
+    state_machine.document.note_pending(pending)
+    return [pending]
+
+sectnum.options = {'depth': int}

Added: 
trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/references.py
===================================================================
--- 
trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/references.py  
    2004-03-07 00:58:55 UTC (rev 5248)
+++ 
trunk/www/utils/helpers/docutils/docutils/parsers/rst/directives/references.py  
    2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,23 @@
+# Author: David Goodger, Dmitry Jemerov
+# Contact: address@hidden
+# Revision: $Revision: 1.5 $
+# Date: $Date: 2002/10/24 00:57:32 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Directives for references and targets.
+"""
+
+__docformat__ = 'reStructuredText'
+
+from docutils import nodes
+from docutils.transforms import references
+
+
+def target_notes(name, arguments, options, content, lineno,
+                 content_offset, block_text, state, state_machine):
+    """Target footnote generation."""
+    pending = nodes.pending(references.TargetNotes)
+    state_machine.document.note_pending(pending)
+    nodelist = [pending]
+    return nodelist

Added: 
trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/__init__.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/__init__.py 
2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/__init__.py 
2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,24 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.5 $
+# Date: $Date: 2002/11/14 02:25:38 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains modules for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+_languages = {}
+
+def get_language(language_code):
+    if _languages.has_key(language_code):
+        return _languages[language_code]
+    try:
+        module = __import__(language_code, globals(), locals())
+    except ImportError:
+        return None
+    _languages[language_code] = module
+    return module
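
For illustration, the language-code lookup above is what lets the parser resolve
localized directive names, e.g. (a sketch, assuming this package is importable)::

    from docutils.parsers.rst import languages

    german = languages.get_language('de')
    print german.directives['achtung']      # -> 'attention'
    print languages.get_language('xx')      # unknown code -> None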

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/de.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/de.py       
2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/de.py       
2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,56 @@
+# -*- coding: iso-8859-1 -*-
+# Author: Engelbert Gruber
+# Contact: address@hidden
+# Revision: $Revision: 1.10 $
+# Date: $Date: 2003/06/17 06:15:49 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+German-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+      'achtung': 'attention',
+      'vorsicht': 'caution',
+      'gefahr': 'danger',
+      'fehler': 'error',
+      'hinweis': 'hint',
+      'wichtig': 'important',
+      'notiz': 'note',
+      'tip': 'tip',
+      'warnung': 'warning',
+      'ermahnung': 'admonition',
+      'kasten': 'sidebar', # seitenkasten ?
+      'thema': 'topic', 
+      'line-block': 'line-block',
+      'parsed-literal': 'parsed-literal',
+      'rubrik': 'rubric',
+      'epigraph (translation required)': 'epigraph',
+      'highlights (translation required)': 'highlights',
+      'pull-quote (translation required)': 'pull-quote', # kasten too ?
+      #'questions': 'questions',
+      #'qa': 'questions',
+      #'faq': 'questions',
+      'meta': 'meta',
+      #'imagemap': 'imagemap',
+      'bild': 'image',
+      'abbildung': 'figure',
+      'raw': 'raw',         # unbearbeitet
+      'include': 'include', # einfügen, "füge ein" would be more like a command.
+                            # einfügung would be the noun.
+      'ersetzung': 'replace', # ersetzen, ersetze
+      'unicode': 'unicode',
+      'klasse': 'class',    # offer class too ?
+      'inhalt': 'contents',
+      'sectnum': 'sectnum',
+      'section-numbering': 'sectnum',
+      'target-notes': 'target-notes',
+      #'footnotes': 'footnotes',
+      #'citations': 'citations',
+      'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""English name to registered (in directives/__init__.py) directive name
+mapping."""

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/en.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/en.py       
2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/en.py       
2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,87 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.13 $
+# Date: $Date: 2003/06/05 15:15:41 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+English-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+      'attention': 'attention',
+      'caution': 'caution',
+      'danger': 'danger',
+      'error': 'error',
+      'hint': 'hint',
+      'important': 'important',
+      'note': 'note',
+      'tip': 'tip',
+      'warning': 'warning',
+      'admonition': 'admonition',
+      'sidebar': 'sidebar',
+      'topic': 'topic',
+      'line-block': 'line-block',
+      'parsed-literal': 'parsed-literal',
+      'rubric': 'rubric',
+      'epigraph': 'epigraph',
+      'highlights': 'highlights',
+      'pull-quote': 'pull-quote',
+      #'questions': 'questions',
+      #'qa': 'questions',
+      #'faq': 'questions',
+      'meta': 'meta',
+      #'imagemap': 'imagemap',
+      'image': 'image',
+      'figure': 'figure',
+      'include': 'include',
+      'raw': 'raw',
+      'replace': 'replace',
+      'unicode': 'unicode',
+      'class': 'class',
+      'contents': 'contents',
+      'sectnum': 'sectnum',
+      'section-numbering': 'sectnum',
+      #'footnotes': 'footnotes',
+      #'citations': 'citations',
+      'target-notes': 'target-notes',
+      'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""English name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+    'abbreviation': 'abbreviation',
+    'ab': 'abbreviation',
+    'acronym': 'acronym',
+    'ac': 'acronym',
+    'index': 'index',
+    'i': 'index',
+    'subscript': 'subscript',
+    'sub': 'subscript',
+    'superscript': 'superscript',
+    'sup': 'superscript',
+    'title-reference': 'title-reference',
+    'title': 'title-reference',
+    't': 'title-reference',
+    'pep-reference': 'pep-reference',
+    'pep': 'pep-reference',
+    'rfc-reference': 'rfc-reference',
+    'rfc': 'rfc-reference',
+    'emphasis': 'emphasis',
+    'strong': 'strong',
+    'literal': 'literal',
+    'named-reference': 'named-reference',
+    'anonymous-reference': 'anonymous-reference',
+    'footnote-reference': 'footnote-reference',
+    'citation-reference': 'citation-reference',
+    'substitution-reference': 'substitution-reference',
+    'target': 'target',
+    'uri-reference': 'uri-reference',
+    'uri': 'uri-reference',
+    'url': 'uri-reference',}
+"""Mapping of English role names to canonical role names for interpreted text.
+"""

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/es.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/es.py       
2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/es.py       
2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,95 @@
+# Author: Marcelo Huerta San Martín
+# Contact: address@hidden
+# Revision: $Revision: 1.5 $
+# Date: $Date: 2003/06/05 00:42:44 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Spanish-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+      u'atenci\u00f3n': 'attention',
+      u'atencion': 'attention',
+      u'precauci\u00f3n': 'caution',
+      u'precaucion': 'caution',
+      u'peligro': 'danger',
+      u'error': 'error',
+      u'sugerencia': 'hint',
+      u'importante': 'important',
+      u'nota': 'note',
+      u'consejo': 'tip',
+      u'advertencia': 'warning',
+      u'exhortacion': 'admonition',
+      u'exhortaci\u00f3n': 'admonition',
+      u'nota-al-margen': 'sidebar',
+      u'tema': 'topic',
+      u'bloque-de-lineas': 'line-block',
+      u'bloque-de-l\u00edneas': 'line-block',
+      u'literal-evaluado': 'parsed-literal',
+      u'firma': 'rubric',
+      u'ep\u00edgrafe': 'epigraph',
+      u'epigrafe': 'epigraph',
+      u'destacado': 'highlights',
+      u'cita-destacada': 'pull-quote',
+      #'questions': 'questions',
+      #'qa': 'questions',
+      #'faq': 'questions',
+      u'meta': 'meta',
+      #'imagemap': 'imagemap',
+      u'imagen': 'image',
+      u'figura': 'figure',
+      u'incluir': 'include',
+      u'raw': 'raw',
+      u'reemplazar': 'replace',
+      u'unicode': 'unicode',
+      u'clase': 'class',
+      u'contenido': 'contents',
+      u'numseccion': 'sectnum',
+      u'numsecci\u00f3n': 'sectnum',
+      u'numeracion-seccion': 'sectnum',
+      u'numeraci\u00f3n-secci\u00f3n': 'sectnum',
+      u'notas-destino': 'target-notes',
+      #'footnotes': 'footnotes',
+      #'citations': 'citations',
+      u'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""English name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+    u'abreviatura': 'abbreviation',
+    u'ab': 'abbreviation',
+    u'acronimo': 'acronym',
+    u'acr\u00f3nimo': 'acronym',
+    u'ac': 'acronym',
+    u'indice': 'index',
+    u'i': 'index',
+    u'referencia-titulo': 'title-reference',
+    u'titulo': 'title-reference',
+    u't': 'title-reference',
+    u'referencia-pep': 'pep-reference',
+    u'pep': 'pep-reference',
+    u'referencia-rfc': 'rfc-reference',
+    u'rfc': 'rfc-reference',
+    u'enfasis': 'emphasis',
+    u'\u00e9nfasis': 'emphasis',
+    u'destacado': 'strong',
+    u'literal': 'literal',
+    u'referencia-con-nombre': 'named-reference',
+    u'referencia-anonima': 'anonymous-reference',
+    u'referencia-an\u00f3nima': 'anonymous-reference',
+    u'referencia-nota-al-pie': 'footnote-reference',
+    u'referencia-cita': 'citation-reference',
+    u'referencia-sustitucion': 'substitution-reference',
+    u'referencia-sustituci\u00f3n': 'substitution-reference',
+    u'destino': 'target',
+    u'referencia-uri': 'uri-reference',
+    u'uri': 'uri-reference',
+    u'url': 'uri-reference',
+    }
+"""Mapping of Spanish role names to canonical role names for interpreted text.
+"""

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/fr.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/fr.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/fr.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,57 @@
+# Authors: David Goodger; William Dode
+# Contact: address@hidden
+# Revision: $Revision: 1.9 $
+# Date: $Date: 2003/06/04 21:55:55 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+French-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+      u'attention': 'attention',
+      u'pr\u00E9caution': 'caution',
+      u'danger': 'danger',
+      u'erreur': 'error',
+      u'conseil': 'hint',
+      u'important': 'important',
+      u'note': 'note',
+      u'astuce': 'tip',
+      u'avertissement': 'warning',
+      u'admonition': 'admonition',
+      u'encadr\u00E9': 'sidebar',
+      u'sujet': 'topic',
+      u'bloc-textuel': 'line-block',
+      u'bloc-interpr\u00E9t\u00E9': 'parsed-literal',
+      u'code-interpr\u00E9t\u00E9': 'parsed-literal',
+      u'intertitre': 'rubric',
+      u'exergue': 'epigraph',
+      u'\u00E9pigraphe': 'epigraph',
+      u'chapeau': 'highlights',
+      u'accroche': 'pull-quote',
+      #u'questions': 'questions',
+      #u'qr': 'questions',
+      #u'faq': 'questions',
+      u'meta': 'meta',
+      #u'imagemap (translation required)': 'imagemap',
+      u'image': 'image',
+      u'figure': 'figure',
+      u'inclure': 'include',
+      u'brut': 'raw',
+      u'remplacer': 'replace',
+      u'unicode': 'unicode',
+      u'classe': 'class',
+      u'sommaire': 'contents',
+      u'table-des-mati\u00E8res': 'contents',
+      u'sectnum': 'sectnum',
+      u'section-num\u00E9rot\u00E9e': 'sectnum',
+      u'liens': 'target-notes',
+      #u'footnotes (translation required)': 'footnotes',
+      #u'citations (translation required)': 'citations',
+      }
+"""French name to registered (in directives/__init__.py) directive name
+mapping."""

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/it.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/it.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/it.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,54 @@
+# Author: Nicola Larosa
+# Contact: address@hidden
+# Revision: $Revision: 1.4 $
+# Date: $Date: 2003/06/03 02:15:45 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Italian-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+      'attenzione': 'attention',
+      'cautela': 'caution',
+      'pericolo': 'danger',
+      'errore': 'error',
+      'suggerimento': 'hint',
+      'importante': 'important',
+      'nota': 'note',
+      'consiglio': 'tip',
+      'avvertenza': 'warning',
+      'admonition (translation required)': 'admonition',
+      'sidebar (translation required)': 'sidebar',
+      'argomento': 'topic',
+      'blocco di linee': 'line-block',
+      'parsed-literal': 'parsed-literal',
+      'rubric (translation required)': 'rubric',
+      'epigraph (translation required)': 'epigraph',
+      'highlights (translation required)': 'highlights',
+      'pull-quote (translation required)': 'pull-quote',
+      #'questions': 'questions',
+      #'qa': 'questions',
+      #'faq': 'questions',
+      'meta': 'meta',
+      #'imagemap': 'imagemap',
+      'immagine': 'image',
+      'figura': 'figure',
+      'includi': 'include',
+      'grezzo': 'raw',
+      'sostituisci': 'replace',
+      'unicode': 'unicode',
+      'class (translation required)': 'class',
+      'indice': 'contents',
+      'seznum': 'sectnum',
+      'section-numbering': 'sectnum',
+      'target-notes': 'target-notes',
+      #'footnotes': 'footnotes',
+      #'citations': 'citations',
+      'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""English name to registered (in directives/__init__.py) directive name
+mapping."""

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/sk.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/sk.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/sk.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,54 @@
+# Author: Miroslav Vasko
+# Contact: address@hidden
+# Revision: $Revision: 1.5 $
+# Date: $Date: 2003/06/03 02:15:45 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Slovak-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+      u'pozor': 'attention',
+      u'opatrne': 'caution',
+      u'nebezpe\xe8enstvo': 'danger',
+      u'chyba': 'error',
+      u'rada': 'hint',
+      u'd\xf4le\x9eit\xe9': 'important',
+      u'pozn\xe1mka': 'note',
+      u'tip': 'tip',
+      u'varovanie': 'warning',
+      u'admonition (translation required)': 'admonition',
+      u'sidebar (translation required)': 'sidebar',
+      u't\xe9ma': 'topic',
+      u'blok-riadkov': 'line-block',
+      u'parsed-literal': 'parsed-literal',
+      u'rubric (translation required)': 'rubric',
+      u'epigraph (translation required)': 'epigraph',
+      u'highlights (translation required)': 'highlights',
+      u'pull-quote (translation required)': 'pull-quote',
+      #u'questions': 'questions',
+      #u'qa': 'questions',
+      #u'faq': 'questions',
+      u'meta': 'meta',
+      #u'imagemap': 'imagemap',
+      u'obr\xe1zok': 'image',
+      u'tvar': 'figure',
+      u'vlo\x9ei\x9d': 'include',
+      u'raw': 'raw',
+      u'nahradi\x9d': 'replace',
+      u'unicode': 'unicode',
+      u'class (translation required)': 'class',
+      u'obsah': 'contents',
+      u'\xe8as\x9d': 'sectnum',
+      u'\xe8as\x9d-\xe8\xedslovanie': 'sectnum',
+      u'cie\xbeov\xe9-pozn\xe1mky': 'target-notes',
+      #u'footnotes': 'footnotes',
+      #u'citations': 'citations',
+      }
+"""Slovak name to registered (in directives/__init__.py) directive name
+mapping."""

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/sv.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/sv.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/languages/sv.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,53 @@
+# Author:    Adam Chodorowski
+# Contact:   address@hidden
+# Revision:  $Revision: 1.10 $
+# Date:      $Date: 2003/06/03 02:15:45 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Swedish language mappings for language-dependent features of reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+      u'observera': 'attention',
+      u'caution (translation required)': 'caution',
+      u'fara': 'danger',
+      u'fel': 'error',
+      u'v\u00e4gledning': 'hint',
+      u'viktigt': 'important',
+      u'notera': 'note',
+      u'tips': 'tip',
+      u'varning': 'warning',
+      u'admonition (translation required)': 'admonition',
+      u'sidebar (translation required)': 'sidebar',
+      u'\u00e4mne': 'topic',
+      u'line-block (translation required)': 'line-block',
+      u'parsed-literal (translation required)': 'parsed-literal',
+      u'mellanrubrik': 'rubric',
+      u'epigraph (translation required)': 'epigraph',
+      u'highlights (translation required)': 'highlights',
+      u'pull-quote (translation required)': 'pull-quote',
+      # u'fr\u00e5gor': 'questions',
+      # NOTE: A bit long, but recommended by http://www.nada.kth.se/dataterm/:
+      # u'fr\u00e5gor-och-svar': 'questions',
+      # u'vanliga-fr\u00e5gor': 'questions',  
+      u'meta': 'meta',
+      # u'bildkarta': 'imagemap',   # FIXME: Translation might be too literal.
+      u'bild': 'image',
+      u'figur': 'figure',
+      u'inkludera': 'include',   
+      u'r\u00e5': 'raw',            # FIXME: Translation might be too literal.
+      u'ers\u00e4tt': 'replace', 
+      u'unicode': 'unicode',
+      u'class (translation required)': 'class',
+      u'inneh\u00e5ll': 'contents',
+      u'sektionsnumrering': 'sectnum',
+      u'target-notes (translation required)': 'target-notes',
+      # u'fotnoter': 'footnotes',
+      # u'citeringar': 'citations',
+      }
+"""Swedish name to registered (in directives/__init__.py) directive name
+mapping."""

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/states.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/states.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/states.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,2913 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.50 $
+# Date: $Date: 2003/06/16 03:26:53 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This is the ``docutils.parsers.restructuredtext.states`` module, the core of
+the reStructuredText parser.  It defines the following:
+
+:Classes:
+    - `RSTStateMachine`: reStructuredText parser's entry point.
+    - `NestedStateMachine`: recursive StateMachine.
+    - `RSTState`: reStructuredText State superclass.
+    - `Inliner`: For parsing inline markup.
+    - `Body`: Generic classifier of the first line of a block.
+    - `SpecializedBody`: Superclass for compound element members.
+    - `BulletList`: Second and subsequent bullet_list list_items
+    - `DefinitionList`: Second+ definition_list_items.
+    - `EnumeratedList`: Second+ enumerated_list list_items.
+    - `FieldList`: Second+ fields.
+    - `OptionList`: Second+ option_list_items.
+    - `RFC2822List`: Second+ RFC2822-style fields.
+    - `ExtensionOptions`: Parses directive option fields.
+    - `Explicit`: Second+ explicit markup constructs.
+    - `SubstitutionDef`: For embedded directives in substitution definitions.
+    - `Text`: Classifier of second line of a text block.
+    - `SpecializedText`: Superclass for continuation lines of Text-variants.
+    - `Definition`: Second line of potential definition_list_item.
+    - `Line`: Second line of overlined section title or transition marker.
+    - `Struct`: An auxiliary collection class.
+
+:Exception classes:
+    - `MarkupError`
+    - `ParserError`
+    - `MarkupMismatch`
+
+:Functions:
+    - `escape2null()`: Return a string, escape-backslashes converted to nulls.
+    - `unescape()`: Return a string, nulls removed or restored to backslashes.
+
+:Attributes:
+    - `state_classes`: set of State classes used with `RSTStateMachine`.
+
+Parser Overview
+===============
+
+The reStructuredText parser is implemented as a recursive state machine,
+examining its input one line at a time.  To understand how the parser works,
+please first become familiar with the `docutils.statemachine` module.  In the
+description below, references are made to classes defined in this module;
+please see the individual classes for details.
+
+Parsing proceeds as follows:
+
+1. The state machine examines each line of input, checking each of the
+   transition patterns of the state `Body`, in order, looking for a match.
+   The implicit transitions (blank lines and indentation) are checked before
+   any others.  The 'text' transition is a catch-all (matches anything).
+
+2. The method associated with the matched transition pattern is called.
+
+   A. Some transition methods are self-contained, appending elements to the
+      document tree (`Body.doctest` parses a doctest block).  The parser's
+      current line index is advanced to the end of the element, and parsing
+      continues with step 1.
+
+   B. Other transition methods trigger the creation of a nested state machine,
+      whose job is to parse a compound construct ('indent' does a block quote,
+      'bullet' does a bullet list, 'overline' does a section [first checking
+      for a valid section header], etc.).
+
+      - In the case of lists and explicit markup, a one-off state machine is
+        created and run to parse contents of the first item.
+
+      - A new state machine is created and its initial state is set to the
+        appropriate specialized state (`BulletList` in the case of the
+        'bullet' transition; see `SpecializedBody` for more detail).  This
+        state machine is run to parse the compound element (or series of
+        explicit markup elements), and returns as soon as a non-member element
+        is encountered.  For example, the `BulletList` state machine ends as
+        soon as it encounters an element which is not a list item of that
+        bullet list.  The optional omission of inter-element blank lines is
+        enabled by this nested state machine.
+
+      - The current line index is advanced to the end of the elements parsed,
+        and parsing continues with step 1.
+
+   C. The result of the 'text' transition depends on the next line of text.
+      The current state is changed to `Text`, under which the second line is
+      examined.  If the second line is:
+
+      - Indented: The element is a definition list item, and parsing proceeds
+        similarly to step 2.B, using the `DefinitionList` state.
+
+      - A line of uniform punctuation characters: The element is a section
+        header; again, parsing proceeds as in step 2.B, and `Body` is still
+        used.
+
+      - Anything else: The element is a paragraph, which is examined for
+        inline markup and appended to the parent element.  Processing
+        continues with step 1.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import re
+import roman
+from types import TupleType
+from docutils import nodes, statemachine, utils, urischemes
+from docutils import ApplicationError, DataError
+from docutils.statemachine import StateMachineWS, StateWS
+from docutils.nodes import fully_normalize_name as normalize_name
+from docutils.parsers.rst import directives, languages, tableparser
+from docutils.parsers.rst.languages import en as _fallback_language_module
+
+
+class MarkupError(DataError): pass
+class UnknownInterpretedRoleError(DataError): pass
+class InterpretedRoleNotImplementedError(DataError): pass
+class ParserError(ApplicationError): pass
+class MarkupMismatch(Exception): pass
+
+
+class Struct:
+
+    """Stores data attributes for dotted-attribute access."""
+
+    def __init__(self, **keywordargs):
+        self.__dict__.update(keywordargs)
+
+
+class RSTStateMachine(StateMachineWS):
+
+    """
+    reStructuredText's master StateMachine.
+
+    The entry point to reStructuredText parsing is the `run()` method.
+    """
+
+    def run(self, input_lines, document, input_offset=0, match_titles=1,
+            inliner=None):
+        """
+        Parse `input_lines` and return a `docutils.nodes.document` instance.
+
+        Extend `StateMachineWS.run()`: set up parse-global data, run the
+        StateMachine, and return the resulting
+        document.
+        """
+        self.language = languages.get_language(
+            document.settings.language_code)
+        self.match_titles = match_titles
+        if inliner is None:
+            inliner = Inliner()
+        inliner.init_customizations(document.settings)
+        self.memo = Struct(document=document,
+                           reporter=document.reporter,
+                           language=self.language,
+                           title_styles=[],
+                           section_level=0,
+                           section_bubble_up_kludge=0,
+                           inliner=inliner)
+        self.document = document
+        self.attach_observer(document.note_source)
+        self.reporter = self.memo.reporter
+        self.node = document
+        results = StateMachineWS.run(self, input_lines, input_offset,
+                                     input_source=document['source'])
+        assert results == [], 'RSTStateMachine.run() results should be empty!'
+        self.check_document()
+        self.node = self.memo = None    # remove unneeded references
+
+    def check_document(self):
+        """Check for illegal structure: empty document."""
+        if len(self.document) == 0:
+            error = self.reporter.error(
+                'Document empty; must have contents.', line=0)
+            self.document += error
+
+
+class NestedStateMachine(StateMachineWS):
+
+    """
+    StateMachine run from within other StateMachine runs, to parse nested
+    document structures.
+    """
+
+    def run(self, input_lines, input_offset, memo, node, match_titles=1):
+        """
+        Parse `input_lines` and populate a `docutils.nodes.document` instance.
+
+        Extend `StateMachineWS.run()`: set up document-wide data.
+        """
+        self.match_titles = match_titles
+        self.memo = memo
+        self.document = memo.document
+        self.attach_observer(self.document.note_source)
+        self.reporter = memo.reporter
+        self.node = node
+        results = StateMachineWS.run(self, input_lines, input_offset)
+        assert results == [], ('NestedStateMachine.run() results should be '
+                               'empty!')
+        return results
+
+
+class RSTState(StateWS):
+
+    """
+    reStructuredText State superclass.
+
+    Contains methods used by all State subclasses.
+    """
+
+    nested_sm = NestedStateMachine
+
+    def __init__(self, state_machine, debug=0):
+        self.nested_sm_kwargs = {'state_classes': state_classes,
+                                 'initial_state': 'Body'}
+        StateWS.__init__(self, state_machine, debug)
+
+    def runtime_init(self):
+        StateWS.runtime_init(self)
+        memo = self.state_machine.memo
+        self.memo = memo
+        self.reporter = memo.reporter
+        self.inliner = memo.inliner
+        self.document = memo.document
+        self.parent = self.state_machine.node
+
+    def goto_line(self, abs_line_offset):
+        """
+        Jump to input line `abs_line_offset`, ignoring jumps past the end.
+        """
+        try:
+            self.state_machine.goto_line(abs_line_offset)
+        except EOFError:
+            pass
+
+    def no_match(self, context, transitions):
+        """
+        Override `StateWS.no_match` to generate a system message.
+
+        This code should never be run.
+        """
+        self.reporter.severe(
+            'Internal error: no transition pattern match.  State: "%s"; '
+            'transitions: %s; context: %s; current line: %r.'
+            % (self.__class__.__name__, transitions, context,
+               self.state_machine.line),
+            line=self.state_machine.abs_line_number())
+        return context, None, []
+
+    def bof(self, context):
+        """Called at beginning of file."""
+        return [], []
+
+    def nested_parse(self, block, input_offset, node, match_titles=0,
+                     state_machine_class=None, state_machine_kwargs=None):
+        """
+        Create a new StateMachine rooted at `node` and run it over the input
+        `block`.
+        """
+        if state_machine_class is None:
+            state_machine_class = self.nested_sm
+        if state_machine_kwargs is None:
+            state_machine_kwargs = self.nested_sm_kwargs
+        block_length = len(block)
+        state_machine = state_machine_class(debug=self.debug,
+                                            **state_machine_kwargs)
+        state_machine.run(block, input_offset, memo=self.memo,
+                          node=node, match_titles=match_titles)
+        state_machine.unlink()
+        new_offset = state_machine.abs_line_offset()
+        # Adjustment for block if modified in nested parse:
+        self.state_machine.next_line(len(block) - block_length)
+        return new_offset
+
+    def nested_list_parse(self, block, input_offset, node, initial_state,
+                          blank_finish,
+                          blank_finish_state=None,
+                          extra_settings={},
+                          match_titles=0,
+                          state_machine_class=None,
+                          state_machine_kwargs=None):
+        """
+        Create a new StateMachine rooted at `node` and run it over the input
+        `block`. Also keep track of optional intermediate blank lines and the
+        required final one.
+        """
+        if state_machine_class is None:
+            state_machine_class = self.nested_sm
+        if state_machine_kwargs is None:
+            state_machine_kwargs = self.nested_sm_kwargs.copy()
+        state_machine_kwargs['initial_state'] = initial_state
+        state_machine = state_machine_class(debug=self.debug,
+                                            **state_machine_kwargs)
+        if blank_finish_state is None:
+            blank_finish_state = initial_state
+        state_machine.states[blank_finish_state].blank_finish = blank_finish
+        for key, value in extra_settings.items():
+            setattr(state_machine.states[initial_state], key, value)
+        state_machine.run(block, input_offset, memo=self.memo,
+                          node=node, match_titles=match_titles)
+        blank_finish = state_machine.states[blank_finish_state].blank_finish
+        state_machine.unlink()
+        return state_machine.abs_line_offset(), blank_finish
+
+    def section(self, title, source, style, lineno, messages):
+        """Check for a valid subsection and create one if it checks out."""
+        if self.check_subsection(source, style, lineno):
+            self.new_subsection(title, lineno, messages)
+
+    def check_subsection(self, source, style, lineno):
+        """
+        Check for a valid subsection header.  Return 1 (true) or None (false).
+
+        When a new section is reached that isn't a subsection of the current
+        section, back up the line count (use ``previous_line(-x)``), then
+        ``raise EOFError``.  The current StateMachine will finish, then the
+        calling StateMachine can re-examine the title.  This will work its way
+        back up the calling chain until the correct section level is reached.
+
+        @@@ Alternative: Evaluate the title, store the title info & level, and
+        back up the chain until that level is reached.  Store in memo? Or
+        return in results?
+
+        :Exception: `EOFError` when a sibling or supersection encountered.
+        """
+        memo = self.memo
+        title_styles = memo.title_styles
+        mylevel = memo.section_level
+        try:                            # check for existing title style
+            level = title_styles.index(style) + 1
+        except ValueError:              # new title style
+            if len(title_styles) == memo.section_level: # new subsection
+                title_styles.append(style)
+                return 1
+            else:                       # not at lowest level
+                self.parent += self.title_inconsistent(source, lineno)
+                return None
+        if level <= mylevel:            # sibling or supersection
+            memo.section_level = level   # bubble up to parent section
+            if len(style) == 2:
+                memo.section_bubble_up_kludge = 1
+            # back up 2 lines for underline title, 3 for overline title
+            self.state_machine.previous_line(len(style) + 1)
+            raise EOFError              # let parent section re-evaluate
+        if level == mylevel + 1:        # immediate subsection
+            return 1
+        else:                           # invalid subsection
+            self.parent += self.title_inconsistent(source, lineno)
+            return None
+
+    def title_inconsistent(self, sourcetext, lineno):
+        error = self.reporter.severe(
+            'Title level inconsistent:', nodes.literal_block('', sourcetext),
+            line=lineno)
+        return error
+
+    def new_subsection(self, title, lineno, messages):
+        """Append new subsection to document tree. On return, check level."""
+        memo = self.memo
+        mylevel = memo.section_level
+        memo.section_level += 1
+        section_node = nodes.section()
+        self.parent += section_node
+        textnodes, title_messages = self.inline_text(title, lineno)
+        titlenode = nodes.title(title, '', *textnodes)
+        name = normalize_name(titlenode.astext())
+        section_node['name'] = name
+        section_node += titlenode
+        section_node += messages
+        section_node += title_messages
+        self.document.note_implicit_target(section_node, section_node)
+        offset = self.state_machine.line_offset + 1
+        absoffset = self.state_machine.abs_line_offset() + 1
+        newabsoffset = self.nested_parse(
+              self.state_machine.input_lines[offset:], input_offset=absoffset,
+              node=section_node, match_titles=1)
+        self.goto_line(newabsoffset)
+        self.check_section(section_node)
+        if memo.section_level <= mylevel: # can't handle next section?
+            raise EOFError              # bubble up to supersection
+        # reset section_level; next pass will detect it properly
+        memo.section_level = mylevel
+
+    def check_section(self, section):
+        """
+        Check for illegal structure: empty section, misplaced transitions.
+        """
+        lineno = section.line
+        if len(section) <= 1:
+            error = self.reporter.error(
+                'Section empty; must have contents.', line=lineno)
+            section += error
+            return
+        if not isinstance(section[0], nodes.title): # shouldn't ever happen
+            error = self.reporter.error(
+                'First element of section must be a title.', line=lineno)
+            section.insert(0, error)
+        if isinstance(section[1], nodes.transition):
+            error = self.reporter.error(
+                'Section may not begin with a transition.',
+                line=section[1].line)
+            section.insert(1, error)
+        if len(section) > 2 and isinstance(section[-1], nodes.transition):
+            error = self.reporter.error(
+                'Section may not end with a transition.',
+                line=section[-1].line)
+            section += error
+
+    def paragraph(self, lines, lineno):
+        """
+        Return a list (paragraph & messages) & a boolean: literal_block next?
+        """
+        data = '\n'.join(lines).rstrip()
+        if data[-2:] == '::':
+            if len(data) == 2:
+                return [], 1
+            elif data[-3] in ' \n':
+                text = data[:-3].rstrip()
+            else:
+                text = data[:-1]
+            literalnext = 1
+        else:
+            text = data
+            literalnext = 0
+        textnodes, messages = self.inline_text(text, lineno)
+        p = nodes.paragraph(data, '', *textnodes)
+        p.line = lineno
+        return [p] + messages, literalnext
+
+    def inline_text(self, text, lineno):
+        """
+        Return 2 lists: nodes (text and inline elements), and system_messages.
+        """
+        return self.inliner.parse(text, lineno, self.memo, self.parent)
+
+    def unindent_warning(self, node_name):
+        return self.reporter.warning(
+            '%s ends without a blank line; unexpected unindent.' % node_name,
+            line=(self.state_machine.abs_line_number() + 1))
+
+
+def build_regexp(definition, compile=1):
+    """
+    Build, compile and return a regular expression based on `definition`.
+
+    :Parameter: `definition`: a 4-tuple (group name, prefix, suffix, parts),
+        where "parts" is a list of regular expressions and/or regular
+        expression definitions to be joined into an or-group.
+    """
+    name, prefix, suffix, parts = definition
+    part_strings = []
+    for part in parts:
+        if type(part) is TupleType:
+            part_strings.append(build_regexp(part, None))
+        else:
+            part_strings.append(part)
+    or_group = '|'.join(part_strings)
+    regexp = '%(prefix)s(?P<%(name)s>%(or_group)s)%(suffix)s' % locals()
+    if compile:
+        return re.compile(regexp, re.UNICODE)
+    else:
+        return regexp
+
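# Sketch, not part of the diff above: build_regexp composes an or-group from a
# (group name, prefix, suffix, parts) definition; nested tuples become nested
# named groups.  'num' is a made-up group name used only for illustration.
_example = build_regexp(('num', r'\b', r'\b', [r'\d+', r'[ivxlc]+']),
                        compile=0)
assert _example == r'\b(?P<num>\d+|[ivxlc]+)\b'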
+
+class Inliner:
+
+    """
+    Parse inline markup; call the `parse()` method.
+    """
+
+    _interpreted_roles = {
+        # Values of ``None`` mean "not implemented yet":
+        'title-reference': 'generic_interpreted_role',
+        'abbreviation': 'generic_interpreted_role',
+        'acronym': 'generic_interpreted_role',
+        'index': None,
+        'subscript': 'generic_interpreted_role',
+        'superscript': 'generic_interpreted_role',
+        'emphasis': 'generic_interpreted_role',
+        'strong': 'generic_interpreted_role',
+        'literal': 'generic_interpreted_role',
+        'named-reference': None,
+        'anonymous-reference': None,
+        'uri-reference': None,
+        'pep-reference': 'pep_reference_role',
+        'rfc-reference': 'rfc_reference_role',
+        'footnote-reference': None,
+        'citation-reference': None,
+        'substitution-reference': None,
+        'target': None,
+        'restructuredtext-unimplemented-role': None}
+    """Mapping of canonical interpreted text role name to method name.
+    Initializes a name to bound-method mapping in `__init__`."""
+
+    default_interpreted_role = 'title-reference'
+    """The role to use when no explicit role is given.
+    Override in subclasses."""
+
+    generic_roles = {'abbreviation': nodes.abbreviation,
+                     'acronym': nodes.acronym,
+                     'emphasis': nodes.emphasis,
+                     'literal': nodes.literal,
+                     'strong': nodes.strong,
+                     'subscript': nodes.subscript,
+                     'superscript': nodes.superscript,
+                     'title-reference': nodes.title_reference,}
+    """Mapping of canonical interpreted text role name to node class.
+    Used by the `generic_interpreted_role` method for simple, straightforward
+    roles (simple wrapping; no extra processing)."""
+
+    def __init__(self, roles=None):
+        """
+        `roles` is a mapping of canonical role name to role function or bound
+        method, which enables additional interpreted text roles.
+        """
+
+        self.implicit_dispatch = [(self.patterns.uri, self.standalone_uri),]
+        """List of (pattern, bound method) tuples, used by
+        `self.implicit_inline`."""
+
+        self.interpreted_roles = {}
+        """Mapping of canonical role name to role function or bound method.
+        Items removed from this mapping will be disabled."""
+
+        for canonical, method in self._interpreted_roles.items():
+            if method:
+                self.interpreted_roles[canonical] = getattr(self, method)
+            else:
+                self.interpreted_roles[canonical] = None
+        self.interpreted_roles.update(roles or {})
+
+    def init_customizations(self, settings):
+        """Setting-based customizations; run when parsing begins."""
+        if settings.pep_references:
+            self.implicit_dispatch.append((self.patterns.pep,
+                                           self.pep_reference))
+        if settings.rfc_references:
+            self.implicit_dispatch.append((self.patterns.rfc,
+                                           self.rfc_reference))
+
+    def parse(self, text, lineno, memo, parent):
+        """
+        Return 2 lists: nodes (text and inline elements), and system_messages.
+
+        Using `self.patterns.initial`, a pattern which matches start-strings
+        (emphasis, strong, interpreted, phrase reference, literal,
+        substitution reference, and inline target) and complete constructs
+        (simple reference, footnote reference), search for a candidate.  When
+        one is found, check for validity (e.g., not a quoted '*' character).
+        If valid, search for the corresponding end string if applicable, and
+        check it for validity.  If not found or invalid, generate a warning
+        and ignore the start-string.  Implicit inline markup (e.g. standalone
+        URIs) is found last.
+        """
+        self.reporter = memo.reporter
+        self.document = memo.document
+        self.language = memo.language
+        self.parent = parent
+        pattern_search = self.patterns.initial.search
+        dispatch = self.dispatch
+        remaining = escape2null(text)
+        processed = []
+        unprocessed = []
+        messages = []
+        while remaining:
+            match = pattern_search(remaining)
+            if match:
+                groups = match.groupdict()
+                method = dispatch[groups['start'] or groups['backquote']
+                                  or groups['refend'] or groups['fnend']]
+                before, inlines, remaining, sysmessages = method(self, match,
+                                                                 lineno)
+                unprocessed.append(before)
+                messages += sysmessages
+                if inlines:
+                    processed += self.implicit_inline(''.join(unprocessed),
+                                                      lineno)
+                    processed += inlines
+                    unprocessed = []
+            else:
+                break
+        remaining = ''.join(unprocessed) + remaining
+        if remaining:
+            processed += self.implicit_inline(remaining, lineno)
+        return processed, messages
+
+    openers = '\'"([{<'
+    closers = '\'")]}>'
+    start_string_prefix = (r'((?<=^)|(?<=[-/: \n%s]))' % re.escape(openers))
+    end_string_suffix = (r'((?=$)|(?=[-/:.,;!? \n\x00%s]))'
+                         % re.escape(closers))
+    non_whitespace_before = r'(?<![ \n])'
+    non_whitespace_escape_before = r'(?<![ \n\x00])'
+    non_whitespace_after = r'(?![ \n])'
+    # Alphanumerics with isolated internal [-._] chars (i.e. not 2 together):
+    simplename = r'(?:(?!_)\w)+(?:[-._](?:(?!_)\w)+)*'
+    # Valid URI characters (see RFC 2396 & RFC 2732):
+    uric = r"""[-_.!~*'()[\];/:@&=+$,%a-zA-Z0-9]"""
+    # Last URI character; same as uric but no punctuation:
+    urilast = r"""[_~/a-zA-Z0-9]"""
+    emailc = r"""[-_!~*'{|}/#?^`&=+$%a-zA-Z0-9]"""
+    email_pattern = r"""
+          %(emailc)s+(?:\.%(emailc)s+)*   # name
+          @                               # at
+          %(emailc)s+(?:\.%(emailc)s*)*   # host
+          %(urilast)s                     # final URI char
+          """
+    parts = ('initial_inline', start_string_prefix, '',
+             [('start', '', non_whitespace_after,  # simple start-strings
+               [r'\*\*',                # strong
+                r'\*(?!\*)',            # emphasis but not strong
+                r'``',                  # literal
+                r'_`',                  # inline internal target
+                r'\|(?!\|)']            # substitution reference
+               ),
+              ('whole', '', end_string_suffix, # whole constructs
+               [# reference name & end-string
+                r'(?P<refname>%s)(?P<refend>__?)' % simplename,
+                ('footnotelabel', r'\[', r'(?P<fnend>\]_)',
+                 [r'[0-9]+',               # manually numbered
+                  r'\#(%s)?' % simplename, # auto-numbered (w/ label?)
+                  r'\*',                   # auto-symbol
+                  r'(?P<citationlabel>%s)' % simplename] # citation reference
+                 )
+                ]
+               ),
+              ('backquote',             # interpreted text or phrase reference
+               '(?P<role>(:%s:)?)' % simplename, # optional role
+               non_whitespace_after,
+               ['`(?!`)']               # but not literal
+               )
+              ]
+             )
+    patterns = Struct(
+          initial=build_regexp(parts),
+          emphasis=re.compile(non_whitespace_escape_before
+                              + r'(\*)' + end_string_suffix),
+          strong=re.compile(non_whitespace_escape_before
+                            + r'(\*\*)' + end_string_suffix),
+          interpreted_or_phrase_ref=re.compile(
+              r"""
+              %(non_whitespace_escape_before)s
+              (
+                `
+                (?P<suffix>
+                  (?P<role>:%(simplename)s:)?
+                  (?P<refend>__?)?
+                )
+              )
+              %(end_string_suffix)s
+              """ % locals(), re.VERBOSE | re.UNICODE),
+          embedded_uri=re.compile(
+              r"""
+              (
+                [ \n]+                  # spaces or beginning of line
+                <                       # open bracket
+                %(non_whitespace_after)s
+                ([^<>\0]+)              # anything but angle brackets & nulls
+                %(non_whitespace_before)s
+                >                       # close bracket w/o whitespace before
+              )
+              $                         # end of string
+              """ % locals(), re.VERBOSE),
+          literal=re.compile(non_whitespace_before + '(``)'
+                             + end_string_suffix),
+          target=re.compile(non_whitespace_escape_before
+                            + r'(`)' + end_string_suffix),
+          substitution_ref=re.compile(non_whitespace_escape_before
+                                      + r'(\|_{0,2})'
+                                      + end_string_suffix),
+          email=re.compile(email_pattern % locals() + '$', re.VERBOSE),
+          uri=re.compile(
+                (r"""
+                %(start_string_prefix)s
+                (?P<whole>
+                  (?P<absolute>           # absolute URI
+                    (?P<scheme>             # scheme (http, ftp, mailto)
+                      [a-zA-Z][a-zA-Z0-9.+-]*
+                    )
+                    :
+                    (
+                      (                       # either:
+                        (//?)?                  # hierarchical URI
+                        %(uric)s*               # URI characters
+                        %(urilast)s             # final URI char
+                      )
+                      (                       # optional query
+                        \?%(uric)s*
+                        %(urilast)s
+                      )?
+                      (                       # optional fragment
+                        \#%(uric)s*
+                        %(urilast)s
+                      )?
+                    )
+                  )
+                |                       # *OR*
+                  (?P<email>              # email address
+                    """ + email_pattern + r"""
+                  )
+                )
+                %(end_string_suffix)s
+                """) % locals(), re.VERBOSE),
+          pep=re.compile(
+                r"""
+                %(start_string_prefix)s
+                (
+                  (pep-(?P<pepnum1>\d+)(.txt)?) # reference to source file
+                |
+                  (PEP\s+(?P<pepnum2>\d+))      # reference by name
+                )
+                %(end_string_suffix)s""" % locals(), re.VERBOSE),
+          rfc=re.compile(
+                r"""
+                %(start_string_prefix)s
+                (RFC(-|\s+)?(?P<rfcnum>\d+))
+                %(end_string_suffix)s""" % locals(), re.VERBOSE))
+
+    def quoted_start(self, match):
+        """Return 1 if inline markup start-string is 'quoted', 0 if not."""
+        string = match.string
+        start = match.start()
+        end = match.end()
+        if start == 0:                  # start-string at beginning of text
+            return 0
+        prestart = string[start - 1]
+        try:
+            poststart = string[end]
+            if self.openers.index(prestart) \
+                  == self.closers.index(poststart):   # quoted
+                return 1
+        except IndexError:              # start-string at end of text
+            return 1
+        except ValueError:              # not quoted
+            pass
+        return 0
+
+    def inline_obj(self, match, lineno, end_pattern, nodeclass,
+                   restore_backslashes=0):
+        string = match.string
+        matchstart = match.start('start')
+        matchend = match.end('start')
+        if self.quoted_start(match):
+            return (string[:matchend], [], string[matchend:], [], '')
+        endmatch = end_pattern.search(string[matchend:])
+        if endmatch and endmatch.start(1):  # 1 or more chars
+            text = unescape(endmatch.string[:endmatch.start(1)],
+                            restore_backslashes)
+            textend = matchend + endmatch.end(1)
+            rawsource = unescape(string[matchstart:textend], 1)
+            return (string[:matchstart], [nodeclass(rawsource, text)],
+                    string[textend:], [], endmatch.group(1))
+        msg = self.reporter.warning(
+              'Inline %s start-string without end-string.'
+              % nodeclass.__name__, line=lineno)
+        text = unescape(string[matchstart:matchend], 1)
+        rawsource = unescape(string[matchstart:matchend], 1)
+        prb = self.problematic(text, rawsource, msg)
+        return string[:matchstart], [prb], string[matchend:], [msg], ''
+
+    def problematic(self, text, rawsource, message):
+        msgid = self.document.set_id(message, self.parent)
+        problematic = nodes.problematic(rawsource, text, refid=msgid)
+        prbid = self.document.set_id(problematic)
+        message.add_backref(prbid)
+        return problematic
+
+    def emphasis(self, match, lineno):
+        before, inlines, remaining, sysmessages, endstring = self.inline_obj(
+              match, lineno, self.patterns.emphasis, nodes.emphasis)
+        return before, inlines, remaining, sysmessages
+
+    def strong(self, match, lineno):
+        before, inlines, remaining, sysmessages, endstring = self.inline_obj(
+              match, lineno, self.patterns.strong, nodes.strong)
+        return before, inlines, remaining, sysmessages
+
+    def interpreted_or_phrase_ref(self, match, lineno):
+        end_pattern = self.patterns.interpreted_or_phrase_ref
+        string = match.string
+        matchstart = match.start('backquote')
+        matchend = match.end('backquote')
+        rolestart = match.start('role')
+        role = match.group('role')
+        position = ''
+        if role:
+            role = role[1:-1]
+            position = 'prefix'
+        elif self.quoted_start(match):
+            return (string[:matchend], [], string[matchend:], [])
+        endmatch = end_pattern.search(string[matchend:])
+        if endmatch and endmatch.start(1):  # 1 or more chars
+            textend = matchend + endmatch.end()
+            if endmatch.group('role'):
+                if role:
+                    msg = self.reporter.warning(
+                        'Multiple roles in interpreted text (both '
+                        'prefix and suffix present; only one allowed).',
+                        line=lineno)
+                    text = unescape(string[rolestart:textend], 1)
+                    prb = self.problematic(text, text, msg)
+                    return string[:rolestart], [prb], string[textend:], [msg]
+                role = endmatch.group('suffix')[1:-1]
+                position = 'suffix'
+            escaped = endmatch.string[:endmatch.start(1)]
+            text = unescape(escaped, 0)
+            rawsource = unescape(string[matchstart:textend], 1)
+            if rawsource[-1:] == '_':
+                if role:
+                    msg = self.reporter.warning(
+                          'Mismatch: both interpreted text role %s and '
+                          'reference suffix.' % position, line=lineno)
+                    text = unescape(string[rolestart:textend], 1)
+                    prb = self.problematic(text, text, msg)
+                    return string[:rolestart], [prb], string[textend:], [msg]
+                return self.phrase_ref(string[:matchstart], string[textend:],
+                                       rawsource, escaped, text)
+            else:
+                try:
+                    return self.interpreted(
+                        string[:rolestart], string[textend:],
+                        rawsource, text, role, lineno)
+                except UnknownInterpretedRoleError, detail:
+                    msg = self.reporter.error(
+                        'Unknown interpreted text role "%s".' % role,
+                        line=lineno)
+                    text = unescape(string[rolestart:textend], 1)
+                    prb = self.problematic(text, text, msg)
+                    return (string[:rolestart], [prb], string[textend:],
+                            detail.args[0] + [msg])
+                except InterpretedRoleNotImplementedError, detail:
+                    msg = self.reporter.error(
+                        'Interpreted text role "%s" not implemented.' % role,
+                        line=lineno)
+                    text = unescape(string[rolestart:textend], 1)
+                    prb = self.problematic(text, text, msg)
+                    return (string[:rolestart], [prb], string[textend:],
+                            detail.args[0] + [msg])
+        msg = self.reporter.warning(
+              'Inline interpreted text or phrase reference start-string '
+              'without end-string.', line=lineno)
+        text = unescape(string[matchstart:matchend], 1)
+        prb = self.problematic(text, text, msg)
+        return string[:matchstart], [prb], string[matchend:], [msg]
+
+    def phrase_ref(self, before, after, rawsource, escaped, text):
+        match = self.patterns.embedded_uri.search(escaped)
+        if match:
+            text = unescape(escaped[:match.start(0)])
+            uri_text = match.group(2)
+            uri = ''.join(uri_text.split())
+            uri = self.adjust_uri(uri)
+            if uri:
+                target = nodes.target(match.group(1), refuri=uri)
+            else:
+                raise ApplicationError('problem with URI: %r' % uri_text)
+        else:
+            target = None
+        refname = normalize_name(text)
+        reference = nodes.reference(rawsource, text)
+        node_list = [reference]
+        if rawsource[-2:] == '__':
+            if target:
+                reference['refuri'] = uri
+            else:
+                reference['anonymous'] = 1
+                self.document.note_anonymous_ref(reference)
+        else:
+            if target:
+                reference['refuri'] = uri
+                target['name'] = refname
+                self.document.note_external_target(target)
+                self.document.note_explicit_target(target, self.parent)
+                node_list.append(target)
+            else:
+                reference['refname'] = refname
+                self.document.note_refname(reference)
+        return before, node_list, after, []
+
+    def adjust_uri(self, uri):
+        match = self.patterns.email.match(uri)
+        if match:
+            return 'mailto:' + uri
+        else:
+            return uri
+
+    def interpreted(self, before, after, rawsource, text, role, lineno):
+        role_function, canonical, messages = self.get_role_function(role,
+                                                                    lineno)
+        if role_function:
+            nodelist, messages2 = role_function(canonical, rawsource, text,
+                                                lineno)
+            messages.extend(messages2)
+            return before, nodelist, after, messages
+        else:
+            raise InterpretedRoleNotImplementedError(messages)
+
+    def get_role_function(self, role, lineno):
+        messages = []
+        msg_text = []
+        if role:
+            name = role.lower()
+        else:
+            name = self.default_interpreted_role
+        canonical = None
+        try:
+            canonical = self.language.roles[name]
+        except AttributeError, error:
+            msg_text.append('Problem retrieving role entry from language '
+                            'module %r: %s.' % (self.language, error))
+        except KeyError:
+            msg_text.append('No role entry for "%s" in module "%s".'
+                            % (name, self.language.__name__))
+        if not canonical:
+            try:
+                canonical = _fallback_language_module.roles[name]
+                msg_text.append('Using English fallback for role "%s".'
+                                % name)
+            except KeyError:
+                msg_text.append('Trying "%s" as canonical role name.'
+                                % name)
+                # Should be an English name, but just in case:
+                canonical = name
+        if msg_text:
+            message = self.reporter.info('\n'.join(msg_text), line=lineno)
+            messages.append(message)
+        try:
+            return self.interpreted_roles[canonical], canonical, messages
+        except KeyError:
+            raise UnknownInterpretedRoleError(messages)
+
+    def literal(self, match, lineno):
+        before, inlines, remaining, sysmessages, endstring = self.inline_obj(
+              match, lineno, self.patterns.literal, nodes.literal,
+              restore_backslashes=1)
+        return before, inlines, remaining, sysmessages
+
+    def inline_internal_target(self, match, lineno):
+        before, inlines, remaining, sysmessages, endstring = self.inline_obj(
+              match, lineno, self.patterns.target, nodes.target)
+        if inlines and isinstance(inlines[0], nodes.target):
+            assert len(inlines) == 1
+            target = inlines[0]
+            name = normalize_name(target.astext())
+            target['name'] = name
+            self.document.note_explicit_target(target, self.parent)
+        return before, inlines, remaining, sysmessages
+
+    def substitution_reference(self, match, lineno):
+        before, inlines, remaining, sysmessages, endstring = self.inline_obj(
+              match, lineno, self.patterns.substitution_ref,
+              nodes.substitution_reference)
+        if len(inlines) == 1:
+            subref_node = inlines[0]
+            if isinstance(subref_node, nodes.substitution_reference):
+                subref_text = subref_node.astext()
+                self.document.note_substitution_ref(subref_node, subref_text)
+                if endstring[-1:] == '_':
+                    reference_node = nodes.reference(
+                        '|%s%s' % (subref_text, endstring), '')
+                    if endstring[-2:] == '__':
+                        reference_node['anonymous'] = 1
+                        self.document.note_anonymous_ref(
+                              reference_node)
+                    else:
+                        reference_node['refname'] = normalize_name(subref_text)
+                        self.document.note_refname(reference_node)
+                    reference_node += subref_node
+                    inlines = [reference_node]
+        return before, inlines, remaining, sysmessages
+
+    def footnote_reference(self, match, lineno):
+        """
+        Handles `nodes.footnote_reference` and `nodes.citation_reference`
+        elements.
+        """
+        label = match.group('footnotelabel')
+        refname = normalize_name(label)
+        string = match.string
+        before = string[:match.start('whole')]
+        remaining = string[match.end('whole'):]
+        if match.group('citationlabel'):
+            refnode = nodes.citation_reference('[%s]_' % label,
+                                               refname=refname)
+            refnode += nodes.Text(label)
+            self.document.note_citation_ref(refnode)
+        else:
+            refnode = nodes.footnote_reference('[%s]_' % label)
+            if refname[0] == '#':
+                refname = refname[1:]
+                refnode['auto'] = 1
+                self.document.note_autofootnote_ref(refnode)
+            elif refname == '*':
+                refname = ''
+                refnode['auto'] = '*'
+                self.document.note_symbol_footnote_ref(
+                      refnode)
+            else:
+                refnode += nodes.Text(label)
+            if refname:
+                refnode['refname'] = refname
+                self.document.note_footnote_ref(refnode)
+            if self.document.settings.trim_footnote_reference_space:
+                before = before.rstrip()
+        return (before, [refnode], remaining, [])
+
+    def reference(self, match, lineno, anonymous=None):
+        referencename = match.group('refname')
+        refname = normalize_name(referencename)
+        referencenode = nodes.reference(referencename + match.group('refend'),
+                                        referencename)
+        if anonymous:
+            referencenode['anonymous'] = 1
+            self.document.note_anonymous_ref(referencenode)
+        else:
+            referencenode['refname'] = refname
+            self.document.note_refname(referencenode)
+        string = match.string
+        matchstart = match.start('whole')
+        matchend = match.end('whole')
+        return (string[:matchstart], [referencenode], string[matchend:], [])
+
+    def anonymous_reference(self, match, lineno):
+        return self.reference(match, lineno, anonymous=1)
+
+    def standalone_uri(self, match, lineno):
+        if not match.group('scheme') or urischemes.schemes.has_key(
+              match.group('scheme').lower()):
+            if match.group('email'):
+                addscheme = 'mailto:'
+            else:
+                addscheme = ''
+            text = match.group('whole')
+            unescaped = unescape(text, 0)
+            return [nodes.reference(unescape(text, 1), unescaped,
+                                    refuri=addscheme + unescaped)]
+        else:                   # not a valid scheme
+            raise MarkupMismatch
+
+    pep_url_local = 'pep-%04d.html'
+    pep_url_absolute = 'http://www.python.org/peps/pep-%04d.html'
+    pep_url = pep_url_absolute
+
+    def pep_reference(self, match, lineno):
+        text = match.group(0)
+        if text.startswith('pep-'):
+            pepnum = int(match.group('pepnum1'))
+        elif text.startswith('PEP'):
+            pepnum = int(match.group('pepnum2'))
+        else:
+            raise MarkupMismatch
+        ref = self.pep_url % pepnum
+        unescaped = unescape(text, 0)
+        return [nodes.reference(unescape(text, 1), unescaped, refuri=ref)]
+
+    rfc_url = 'http://www.faqs.org/rfcs/rfc%d.html'
+
+    def rfc_reference(self, match, lineno):
+        text = match.group(0)
+        if text.startswith('RFC'):
+            rfcnum = int(match.group('rfcnum'))
+            ref = self.rfc_url % rfcnum
+        else:
+            raise MarkupMismatch
+        unescaped = unescape(text, 0)
+        return [nodes.reference(unescape(text, 1), unescaped, refuri=ref)]
+
+    def implicit_inline(self, text, lineno):
+        """
+        Check each of the patterns in `self.implicit_dispatch` for a match,
+        and dispatch to the stored method for the pattern.  Recursively check
+        the text before and after the match.  Return a list of `nodes.Text`
+        and inline element nodes.
+        """
+        if not text:
+            return []
+        for pattern, method in self.implicit_dispatch:
+            match = pattern.search(text)
+            if match:
+                try:
+                    # Must recurse on strings before *and* after the match;
+                    # there may be multiple patterns.
+                    return (self.implicit_inline(text[:match.start()], lineno)
+                            + method(match, lineno) +
+                            self.implicit_inline(text[match.end():], lineno))
+                except MarkupMismatch:
+                    pass
+        return [nodes.Text(unescape(text), rawsource=unescape(text, 1))]
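+
+    # Rough sketch of the implicit dispatch above, assuming standalone URIs
+    # and PEP references are active in the current settings: for the text
+    # "See PEP 258 and http://docutils.sf.net for details" the result is
+    # approximately
+    #   [Text('See '), reference('PEP 258', refuri=pep_url % 258),
+    #    Text(' and '), reference(refuri='http://docutils.sf.net'),
+    #    Text(' for details')]
+    # with the text before and after each match recursed into until no
+    # implicit pattern applies.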
+
+    dispatch = {'*': emphasis,
+                '**': strong,
+                '`': interpreted_or_phrase_ref,
+                '``': literal,
+                '_`': inline_internal_target,
+                ']_': footnote_reference,
+                '|': substitution_reference,
+                '_': reference,
+                '__': anonymous_reference}
+
+    def generic_interpreted_role(self, role, rawtext, text, lineno):
+        try:
+            role_class = self.generic_roles[role]
+        except KeyError:
+            msg = self.reporter.error('Unknown interpreted text role: "%s".'
+                                      % role, line=lineno)
+            prb = self.problematic(text, text, msg)
+            return [prb], [msg]
+        return [role_class(rawtext, text)], []
+
+    def pep_reference_role(self, role, rawtext, text, lineno):
+        try:
+            pepnum = int(text)
+            if pepnum < 0 or pepnum > 9999:
+                raise ValueError
+        except ValueError:
+            msg = self.reporter.error(
+                'PEP number must be a number from 0 to 9999; "%s" is invalid.'
+                % text, line=lineno)
+            prb = self.problematic(text, text, msg)
+            return [prb], [msg]
+        ref = self.pep_url % pepnum
+        return [nodes.reference(rawtext, 'PEP ' + text, refuri=ref)], []
+
+    def rfc_reference_role(self, role, rawtext, text, lineno):
+        try:
+            rfcnum = int(text)
+            if rfcnum <= 0:
+                raise ValueError
+        except ValueError:
+            msg = self.reporter.error(
+                'RFC number must be a number greater than or equal to 1; '
+                '"%s" is invalid.' % text, line=lineno)
+            prb = self.problematic(text, text, msg)
+            return [prb], [msg]
+        ref = self.rfc_url % rfcnum
+        return [nodes.reference(rawtext, 'RFC ' + text, refuri=ref)], []
+
+
+class Body(RSTState):
+
+    """
+    Generic classifier of the first line of a block.
+    """
+
+    enum = Struct()
+    """Enumerated list parsing information."""
+
+    enum.formatinfo = {
+          'parens': Struct(prefix='(', suffix=')', start=1, end=-1),
+          'rparen': Struct(prefix='', suffix=')', start=0, end=-1),
+          'period': Struct(prefix='', suffix='.', start=0, end=-1)}
+    enum.formats = enum.formatinfo.keys()
+    enum.sequences = ['arabic', 'loweralpha', 'upperalpha',
+                      'lowerroman', 'upperroman'] # ORDERED!
+    enum.sequencepats = {'arabic': '[0-9]+',
+                         'loweralpha': '[a-z]',
+                         'upperalpha': '[A-Z]',
+                         'lowerroman': '[ivxlcdm]+',
+                         'upperroman': '[IVXLCDM]+',}
+    enum.converters = {'arabic': int,
+                       'loweralpha':
+                       lambda s, zero=(ord('a')-1): ord(s) - zero,
+                       'upperalpha':
+                       lambda s, zero=(ord('A')-1): ord(s) - zero,
+                       'lowerroman':
+                       lambda s: roman.fromRoman(s.upper()),
+                       'upperroman': roman.fromRoman}
+
+    enum.sequenceregexps = {}
+    for sequence in enum.sequences:
+        enum.sequenceregexps[sequence] = re.compile(
+              enum.sequencepats[sequence] + '$')
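+
+    # Rough examples of the conversion tables built above:
+    #   enum.converters['arabic']('10')     -> 10
+    #   enum.converters['loweralpha']('c')  -> 3
+    #   enum.converters['lowerroman']('iv') -> 4   (via roman.fromRoman)
+    #   enum.sequenceregexps['upperroman'].match('XIV') matches;
+    #   enum.sequenceregexps['arabic'].match('XIV') does not.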
+
+    grid_table_top_pat = re.compile(r'\+-[-+]+-\+ *$')
+    """Matches the top (& bottom) of a full table)."""
+
+    simple_table_top_pat = re.compile('=+( +=+)+ *$')
+    """Matches the top of a simple table."""
+
+    simple_table_border_pat = re.compile('=+[ =]*$')
+    """Matches the bottom & header bottom of a simple table."""
+
+    pats = {}
+    """Fragments of patterns used by transitions."""
+
+    pats['nonalphanum7bit'] = '[!-/:-@[-`{|}~]'
+    pats['alpha'] = '[a-zA-Z]'
+    pats['alphanum'] = '[a-zA-Z0-9]'
+    pats['alphanumplus'] = '[a-zA-Z0-9_-]'
+    pats['enum'] = ('(%(arabic)s|%(loweralpha)s|%(upperalpha)s|%(lowerroman)s'
+                    '|%(upperroman)s)' % enum.sequencepats)
+    pats['optname'] = '%(alphanum)s%(alphanumplus)s*' % pats
+    # @@@ Loosen up the pattern?  Allow Unicode?
+    pats['optarg'] = '%(alpha)s%(alphanumplus)s*' % pats
+    pats['option'] = r'(--?|\+|/)%(optname)s([ =]%(optarg)s)?' % pats
+
+    for format in enum.formats:
+        pats[format] = '(?P<%s>%s%s%s)' % (
+              format, re.escape(enum.formatinfo[format].prefix),
+              pats['enum'], re.escape(enum.formatinfo[format].suffix))
+
+    patterns = {
+          'bullet': r'[-+*]( +|$)',
+          'enumerator': r'(%(parens)s|%(rparen)s|%(period)s)( +|$)' % pats,
+          'field_marker': r':[^: ]([^:]*[^: ])?:( +|$)',
+          'option_marker': r'%(option)s(, %(option)s)*(  +| ?$)' % pats,
+          'doctest': r'>>>( +|$)',
+          'grid_table_top': grid_table_top_pat,
+          'simple_table_top': simple_table_top_pat,
+          'explicit_markup': r'\.\.( +|$)',
+          'anonymous': r'__( +|$)',
+          'line': r'(%(nonalphanum7bit)s)\1* *$' % pats,
+          'text': r''}
+    initial_transitions = (
+          'bullet',
+          'enumerator',
+          'field_marker',
+          'option_marker',
+          'doctest',
+          'grid_table_top',
+          'simple_table_top',
+          'explicit_markup',
+          'anonymous',
+          'line',
+          'text')
+
+    def indent(self, match, context, next_state):
+        """Block quote."""
+        indented, indent, line_offset, blank_finish = \
+              self.state_machine.get_indented()
+        blockquote, messages = self.block_quote(indented, line_offset)
+        self.parent += blockquote
+        self.parent += messages
+        if not blank_finish:
+            self.parent += self.unindent_warning('Block quote')
+        return context, next_state, []
+
+    def block_quote(self, indented, line_offset):
+        blockquote_lines, attribution_lines, attribution_offset = \
+              self.check_attribution(indented, line_offset)
+        blockquote = nodes.block_quote()
+        self.nested_parse(blockquote_lines, line_offset, blockquote)
+        messages = []
+        if attribution_lines:
+            attribution, messages = self.parse_attribution(attribution_lines,
+                                                           attribution_offset)
+            blockquote += attribution
+        return blockquote, messages
+
+    attribution_pattern = re.compile(r'--(?![-\n]) *(?=[^ \n])')
+
+    def check_attribution(self, indented, line_offset):
+        """
+        Check for an attribution in the last contiguous block of `indented`.
+
+        * First line after last blank line must begin with "--" (etc.).
+        * Every line after that must have consistent indentation.
+
+        Return a 3-tuple: (block quote lines, attribution lines,
+        attribution offset).
+        """
+        blank = None
+        nonblank_seen = None
+        indent = 0
+        for i in range(len(indented) - 1, 0, -1): # don't check first line
+            this_line_blank = not indented[i].strip()
+            if nonblank_seen and this_line_blank:
+                match = self.attribution_pattern.match(indented[i + 1])
+                if match:
+                    blank = i
+                break
+            elif not this_line_blank:
+                nonblank_seen = 1
+        if blank and len(indented) - blank > 2: # multi-line attribution
+            indent = (len(indented[blank + 2])
+                      - len(indented[blank + 2].lstrip()))
+            for j in range(blank + 3, len(indented)):
+                if indent != (len(indented[j])
+                              - len(indented[j].lstrip())): # bad shape
+                    blank = None
+                    break
+        if blank:
+            a_lines = indented[blank + 1:]
+            a_lines.strip_indent(match.end(), end=1)
+            a_lines.strip_indent(indent, start=1)
+            return (indented[:blank], a_lines, line_offset + blank + 1)
+        else:
+            return (indented, None, None)
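+
+    # Rough illustration of check_attribution above: for an indented block
+    # ending in
+    #
+    #     Any sufficiently advanced technology is
+    #     indistinguishable from magic.
+    #
+    #     -- Arthur C. Clarke
+    #
+    # the lines before the final blank line become the block quote body and
+    # the trailing line (with "-- " stripped) becomes the attribution text.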
+
+    def parse_attribution(self, indented, line_offset):
+        text = '\n'.join(indented).rstrip()
+        lineno = self.state_machine.abs_line_number() + line_offset
+        textnodes, messages = self.inline_text(text, lineno)
+        node = nodes.attribution(text, '', *textnodes)
+        node.line = lineno
+        return node, messages
+
+    def bullet(self, match, context, next_state):
+        """Bullet list item."""
+        bulletlist = nodes.bullet_list()
+        self.parent += bulletlist
+        bulletlist['bullet'] = match.string[0]
+        i, blank_finish = self.list_item(match.end())
+        bulletlist += i
+        offset = self.state_machine.line_offset + 1   # next line
+        newline_offset, blank_finish = self.nested_list_parse(
+              self.state_machine.input_lines[offset:],
+              input_offset=self.state_machine.abs_line_offset() + 1,
+              node=bulletlist, initial_state='BulletList',
+              blank_finish=blank_finish)
+        self.goto_line(newline_offset)
+        if not blank_finish:
+            self.parent += self.unindent_warning('Bullet list')
+        return [], next_state, []
+
+    def list_item(self, indent):
+        indented, line_offset, blank_finish = \
+              self.state_machine.get_known_indented(indent)
+        listitem = nodes.list_item('\n'.join(indented))
+        if indented:
+            self.nested_parse(indented, input_offset=line_offset,
+                              node=listitem)
+        return listitem, blank_finish
+
+    def enumerator(self, match, context, next_state):
+        """Enumerated List Item"""
+        format, sequence, text, ordinal = self.parse_enumerator(match)
+        if not self.is_enumerated_list_item(ordinal, sequence, format):
+            raise statemachine.TransitionCorrection('text')
+        if ordinal != 1:
+            msg = self.reporter.info(
+                'Enumerated list start value not ordinal-1: "%s" (ordinal %s)'
+                % (text, ordinal), line=self.state_machine.abs_line_number())
+            self.parent += msg
+        enumlist = nodes.enumerated_list()
+        self.parent += enumlist
+        enumlist['enumtype'] = sequence
+        if ordinal != 1:
+            enumlist['start'] = ordinal
+        enumlist['prefix'] = self.enum.formatinfo[format].prefix
+        enumlist['suffix'] = self.enum.formatinfo[format].suffix
+        listitem, blank_finish = self.list_item(match.end())
+        enumlist += listitem
+        offset = self.state_machine.line_offset + 1   # next line
+        newline_offset, blank_finish = self.nested_list_parse(
+              self.state_machine.input_lines[offset:],
+              input_offset=self.state_machine.abs_line_offset() + 1,
+              node=enumlist, initial_state='EnumeratedList',
+              blank_finish=blank_finish,
+              extra_settings={'lastordinal': ordinal, 'format': format})
+        self.goto_line(newline_offset)
+        if not blank_finish:
+            self.parent += self.unindent_warning('Enumerated list')
+        return [], next_state, []
+
+    def parse_enumerator(self, match, expected_sequence=None):
+        """
+        Analyze an enumerator and return the results.
+
+        :Return:
+            - the enumerator format ('period', 'parens', or 'rparen'),
+            - the sequence used ('arabic', 'loweralpha', 'upperroman', etc.),
+            - the text of the enumerator, stripped of formatting, and
+            - the ordinal value of the enumerator ('a' -> 1, 'ii' -> 2, etc.;
+              ``None`` is returned for invalid enumerator text).
+
+        The enumerator format has already been determined by the regular
+        expression match. If `expected_sequence` is given, that sequence is
+        tried first. If not, we check for Roman numeral 1. This way,
+        single-character Roman numerals (which are also alphabetical) can be
+        matched. If no sequence has been matched, all sequences are checked in
+        order.
+        """
+        groupdict = match.groupdict()
+        sequence = ''
+        for format in self.enum.formats:
+            if groupdict[format]:       # was this the format matched?
+                break                   # yes; keep `format`
+        else:                           # shouldn't happen
+            raise ParserError('enumerator format not matched')
+        text = groupdict[format][self.enum.formatinfo[format].start
+                                 :self.enum.formatinfo[format].end]
+        if expected_sequence:
+            try:
+                if self.enum.sequenceregexps[expected_sequence].match(text):
+                    sequence = expected_sequence
+            except KeyError:            # shouldn't happen
+                raise ParserError('unknown enumerator sequence: %s'
+                                  % expected_sequence)
+        elif text == 'i':
+            sequence = 'lowerroman'
+        elif text == 'I':
+            sequence = 'upperroman'
+        if not sequence:
+            for sequence in self.enum.sequences:
+                if self.enum.sequenceregexps[sequence].match(text):
+                    break
+            else:                       # shouldn't happen
+                raise ParserError('enumerator sequence not matched')
+        try:
+            ordinal = self.enum.converters[sequence](text)
+        except roman.InvalidRomanNumeralError:
+            ordinal = None
+        return format, sequence, text, ordinal
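+
+    # Rough examples of the analysis above, as (format, sequence, text,
+    # ordinal) tuples:
+    #   "3."    -> ('period', 'arabic',     '3',   3)
+    #   "(iii)" -> ('parens', 'lowerroman', 'iii', 3)
+    #   "i)"    -> ('rparen', 'lowerroman', 'i',   1)
+    #   "B."    -> ('period', 'upperalpha', 'B',   2)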
+
+    def is_enumerated_list_item(self, ordinal, sequence, format):
+        """
+        Check validity based on the ordinal value and the second line.
+
+        Return true iff the ordinal is valid and the second line is blank,
+        indented, or starts with the next enumerator.
+        """
+        if ordinal is None:
+            return None
+        try:
+            next_line = self.state_machine.next_line()
+        except EOFError:              # end of input lines
+            self.state_machine.previous_line()
+            return 1
+        else:
+            self.state_machine.previous_line()
+        if not next_line[:1].strip():   # blank or indented
+            return 1
+        next_enumerator = self.make_enumerator(ordinal + 1, sequence, format)
+        try:
+            if next_line.startswith(next_enumerator):
+                return 1
+        except TypeError:
+            pass
+        return None
+
+    def make_enumerator(self, ordinal, sequence, format):
+        """
+        Construct and return an enumerated list item marker.
+
+        Return ``None`` for invalid (out of range) ordinals.
+        """
+        if sequence == 'arabic':
+            enumerator = str(ordinal)
+        else:
+            if sequence.endswith('alpha'):
+                if ordinal > 26:
+                    return None
+                enumerator = chr(ordinal + ord('a') - 1)
+            elif sequence.endswith('roman'):
+                try:
+                    enumerator = roman.toRoman(ordinal)
+                except roman.RomanError:
+                    return None
+            else:                       # shouldn't happen
+                raise ParserError('unknown enumerator sequence: "%s"'
+                                  % sequence)
+            if sequence.startswith('lower'):
+                enumerator = enumerator.lower()
+            elif sequence.startswith('upper'):
+                enumerator = enumerator.upper()
+            else:                       # shouldn't happen
+                raise ParserError('unknown enumerator sequence: "%s"'
+                                  % sequence)
+        formatinfo = self.enum.formatinfo[format]
+        return formatinfo.prefix + enumerator + formatinfo.suffix + ' '
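+
+    # Rough examples of the construction above (note the trailing space):
+    #   make_enumerator(3,  'arabic',     'parens') -> '(3) '
+    #   make_enumerator(2,  'upperalpha', 'period') -> 'B. '
+    #   make_enumerator(4,  'lowerroman', 'rparen') -> 'iv) '
+    #   make_enumerator(27, 'loweralpha', 'period') -> None (out of range)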
+
+    def field_marker(self, match, context, next_state):
+        """Field list item."""
+        fieldlist = nodes.field_list()
+        self.parent += fieldlist
+        field, blank_finish = self.field(match)
+        fieldlist += field
+        offset = self.state_machine.line_offset + 1   # next line
+        newline_offset, blank_finish = self.nested_list_parse(
+              self.state_machine.input_lines[offset:],
+              input_offset=self.state_machine.abs_line_offset() + 1,
+              node=fieldlist, initial_state='FieldList',
+              blank_finish=blank_finish)
+        self.goto_line(newline_offset)
+        if not blank_finish:
+            self.parent += self.unindent_warning('Field list')
+        return [], next_state, []
+
+    def field(self, match):
+        name = self.parse_field_marker(match)
+        lineno = self.state_machine.abs_line_number()
+        indented, indent, line_offset, blank_finish = \
+              self.state_machine.get_first_known_indented(match.end())
+        fieldnode = nodes.field()
+        fieldnode.line = lineno
+        fieldnode += nodes.field_name(name, name)
+        fieldbody = nodes.field_body('\n'.join(indented))
+        fieldnode += fieldbody
+        if indented:
+            self.parse_field_body(indented, line_offset, fieldbody)
+        return fieldnode, blank_finish
+
+    def parse_field_marker(self, match):
+        """Extract & return field name from a field marker match."""
+        field = match.string[1:]        # strip off leading ':'
+        field = field[:field.find(':')] # strip off trailing ':' etc.
+        return field
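+
+    # Rough illustration of parse_field_marker above: for a field list line
+    # such as ":Author: J. Random Hacker" the marker match covers
+    # ":Author:", and the bare name "Author" is returned; the remainder of
+    # the line is picked up as the field body by field().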
+
+    def parse_field_body(self, indented, offset, node):
+        self.nested_parse(indented, input_offset=offset, node=node)
+
+    def option_marker(self, match, context, next_state):
+        """Option list item."""
+        optionlist = nodes.option_list()
+        try:
+            listitem, blank_finish = self.option_list_item(match)
+        except MarkupError, (message, lineno):
+            # This shouldn't happen; pattern won't match.
+            msg = self.reporter.error(
+                'Invalid option list marker: %s' % message, line=lineno)
+            self.parent += msg
+            indented, indent, line_offset, blank_finish = \
+                  self.state_machine.get_first_known_indented(match.end())
+            blockquote, messages = self.block_quote(indented, line_offset)
+            self.parent += blockquote
+            self.parent += messages
+            if not blank_finish:
+                self.parent += self.unindent_warning('Option list')
+            return [], next_state, []
+        self.parent += optionlist
+        optionlist += listitem
+        offset = self.state_machine.line_offset + 1   # next line
+        newline_offset, blank_finish = self.nested_list_parse(
+              self.state_machine.input_lines[offset:],
+              input_offset=self.state_machine.abs_line_offset() + 1,
+              node=optionlist, initial_state='OptionList',
+              blank_finish=blank_finish)
+        self.goto_line(newline_offset)
+        if not blank_finish:
+            self.parent += self.unindent_warning('Option list')
+        return [], next_state, []
+
+    def option_list_item(self, match):
+        offset = self.state_machine.abs_line_offset()
+        options = self.parse_option_marker(match)
+        indented, indent, line_offset, blank_finish = \
+              self.state_machine.get_first_known_indented(match.end())
+        if not indented:                # not an option list item
+            self.goto_line(offset)
+            raise statemachine.TransitionCorrection('text')
+        option_group = nodes.option_group('', *options)
+        description = nodes.description('\n'.join(indented))
+        option_list_item = nodes.option_list_item('', option_group,
+                                                  description)
+        if indented:
+            self.nested_parse(indented, input_offset=line_offset,
+                              node=description)
+        return option_list_item, blank_finish
+
+    def parse_option_marker(self, match):
+        """
+        Return a list of `nodes.option` and `nodes.option_argument` objects,
+        parsed from an option marker match.
+
+        :Exception: `MarkupError` for invalid option markers.
+        """
+        optlist = []
+        optionstrings = match.group().rstrip().split(', ')
+        for optionstring in optionstrings:
+            tokens = optionstring.split()
+            delimiter = ' '
+            firstopt = tokens[0].split('=')
+            if len(firstopt) > 1:
+                tokens[:1] = firstopt
+                delimiter = '='
+            if 0 < len(tokens) <= 2:
+                option = nodes.option(optionstring)
+                option += nodes.option_string(tokens[0], tokens[0])
+                if len(tokens) > 1:
+                    option += nodes.option_argument(tokens[1], tokens[1],
+                                                    delimiter=delimiter)
+                optlist.append(option)
+            else:
+                raise MarkupError(
+                    'wrong number of option tokens (=%s), should be 1 or 2: '
+                    '"%s"' % (len(tokens), optionstring),
+                    self.state_machine.abs_line_number() + 1)
+        return optlist
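+
+    # Rough illustration of parse_option_marker above: the marker
+    # "-a FILE, --all=FILE" yields two option nodes:
+    #   option_string '-a'    + option_argument 'FILE' (delimiter ' ')
+    #   option_string '--all' + option_argument 'FILE' (delimiter '=')
+    # A group with three or more tokens raises MarkupError instead.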
+
+    def doctest(self, match, context, next_state):
+        data = '\n'.join(self.state_machine.get_text_block())
+        self.parent += nodes.doctest_block(data, data)
+        return [], next_state, []
+
+    def grid_table_top(self, match, context, next_state):
+        """Top border of a full table."""
+        return self.table_top(match, context, next_state,
+                              self.isolate_grid_table,
+                              tableparser.GridTableParser)
+
+    def simple_table_top(self, match, context, next_state):
+        """Top border of a simple table."""
+        return self.table_top(match, context, next_state,
+                              self.isolate_simple_table,
+                              tableparser.SimpleTableParser)
+
+    def table_top(self, match, context, next_state,
+                  isolate_function, parser_class):
+        """Top border of a generic table."""
+        nodelist, blank_finish = self.table(isolate_function, parser_class)
+        self.parent += nodelist
+        if not blank_finish:
+            msg = self.reporter.warning(
+                'Blank line required after table.',
+                line=self.state_machine.abs_line_number() + 1)
+            self.parent += msg
+        return [], next_state, []
+
+    def table(self, isolate_function, parser_class):
+        """Parse a table."""
+        block, messages, blank_finish = isolate_function()
+        if block:
+            try:
+                parser = parser_class()
+                tabledata = parser.parse(block)
+                tableline = (self.state_machine.abs_line_number() - len(block)
+                             + 1)
+                table = self.build_table(tabledata, tableline)
+                nodelist = [table] + messages
+            except tableparser.TableMarkupError, detail:
+                nodelist = self.malformed_table(block, str(detail)) + messages
+        else:
+            nodelist = messages
+        return nodelist, blank_finish
+
+    def isolate_grid_table(self):
+        messages = []
+        blank_finish = 1
+        try:
+            block = self.state_machine.get_text_block(flush_left=1)
+        except statemachine.UnexpectedIndentationError, instance:
+            block, source, lineno = instance.args
+            messages.append(self.reporter.error('Unexpected indentation.',
+                                                source=source, line=lineno))
+            blank_finish = 0
+        block.disconnect()
+        width = len(block[0].strip())
+        for i in range(len(block)):
+            block[i] = block[i].strip()
+            if block[i][0] not in '+|': # check left edge
+                blank_finish = 0
+                self.state_machine.previous_line(len(block) - i)
+                del block[i:]
+                break
+        if not self.grid_table_top_pat.match(block[-1]): # find bottom
+            blank_finish = 0
+            # from second-last to third line of table:
+            for i in range(len(block) - 2, 1, -1):
+                if self.grid_table_top_pat.match(block[i]):
+                    self.state_machine.previous_line(len(block) - i + 1)
+                    del block[i+1:]
+                    break
+            else:
+                messages.extend(self.malformed_table(block))
+                return [], messages, blank_finish
+        for i in range(len(block)):     # check right edge
+            if len(block[i]) != width or block[i][-1] not in '+|':
+                messages.extend(self.malformed_table(block))
+                return [], messages, blank_finish
+        return block, messages, blank_finish
+
+    def isolate_simple_table(self):
+        start = self.state_machine.line_offset
+        lines = self.state_machine.input_lines
+        limit = len(lines) - 1
+        toplen = len(lines[start].strip())
+        pattern_match = self.simple_table_border_pat.match
+        found = 0
+        found_at = None
+        i = start + 1
+        while i <= limit:
+            line = lines[i]
+            match = pattern_match(line)
+            if match:
+                if len(line.strip()) != toplen:
+                    self.state_machine.next_line(i - start)
+                    messages = self.malformed_table(
+                        lines[start:i+1], 'Bottom/header table border does '
+                        'not match top border.')
+                    return [], messages, i == limit or not lines[i+1].strip()
+                found += 1
+                found_at = i
+                if found == 2 or i == limit or not lines[i+1].strip():
+                    end = i
+                    break
+            i += 1
+        else:                           # reached end of input_lines
+            if found:
+                extra = ' or no blank line after table bottom'
+                self.state_machine.next_line(found_at - start)
+                block = lines[start:found_at+1]
+            else:
+                extra = ''
+                self.state_machine.next_line(i - start - 1)
+                block = lines[start:]
+            messages = self.malformed_table(
+                block, 'No bottom table border found%s.' % extra)
+            return [], messages, not extra
+        self.state_machine.next_line(end - start)
+        block = lines[start:end+1]
+        return block, [], end == limit or not lines[end+1].strip()
+
+    def malformed_table(self, block, detail=''):
+        data = '\n'.join(block)
+        message = 'Malformed table.'
+        lineno = self.state_machine.abs_line_number() - len(block) + 1
+        if detail:
+            message += '\n' + detail
+        error = self.reporter.error(message, nodes.literal_block(data, data),
+                                    line=lineno)
+        return [error]
+
+    def build_table(self, tabledata, tableline):
+        colspecs, headrows, bodyrows = tabledata
+        table = nodes.table()
+        tgroup = nodes.tgroup(cols=len(colspecs))
+        table += tgroup
+        for colspec in colspecs:
+            tgroup += nodes.colspec(colwidth=colspec)
+        if headrows:
+            thead = nodes.thead()
+            tgroup += thead
+            for row in headrows:
+                thead += self.build_table_row(row, tableline)
+        tbody = nodes.tbody()
+        tgroup += tbody
+        for row in bodyrows:
+            tbody += self.build_table_row(row, tableline)
+        return table
+
+    def build_table_row(self, rowdata, tableline):
+        row = nodes.row()
+        for cell in rowdata:
+            if cell is None:
+                continue
+            morerows, morecols, offset, cellblock = cell
+            attributes = {}
+            if morerows:
+                attributes['morerows'] = morerows
+            if morecols:
+                attributes['morecols'] = morecols
+            entry = nodes.entry(**attributes)
+            row += entry
+            if ''.join(cellblock):
+                self.nested_parse(cellblock, input_offset=tableline+offset,
+                                  node=entry)
+        return row
+
+
+    explicit = Struct()
+    """Patterns and constants used for explicit markup recognition."""
+
+    explicit.patterns = Struct(
+          target=re.compile(r"""
+                            (
+                              _               # anonymous target
+                            |               # *OR*
+                              (?P<quote>`?)   # optional open quote
+                              (?![ `])        # first char. not space or
+                                              # backquote
+                              (?P<name>       # reference name
+                                .+?
+                              )
+                              %(non_whitespace_escape_before)s
+                              (?P=quote)      # close quote if open quote used
+                            )
+                            %(non_whitespace_escape_before)s
+                            [ ]?            # optional space
+                            :               # end of reference name
+                            ([ ]+|$)        # followed by whitespace
+                            """ % vars(Inliner), re.VERBOSE),
+          reference=re.compile(r"""
+                               (
+                                 (?P<simple>%(simplename)s)_
+                               |                  # *OR*
+                                 `                  # open backquote
+                                 (?![ ])            # not space
+                                 (?P<phrase>.+?)    # hyperlink phrase
+                                 %(non_whitespace_escape_before)s
+                                 `_                 # close backquote,
+                                                    # reference mark
+                               )
+                               $                  # end of string
+                               """ % vars(Inliner), re.VERBOSE | re.UNICODE),
+          substitution=re.compile(r"""
+                                  (
+                                    (?![ ])          # first char. not space
+                                    (?P<name>.+?)    # substitution text
+                                    %(non_whitespace_escape_before)s
+                                    \|               # close delimiter
+                                  )
+                                  ([ ]+|$)           # followed by whitespace
+                                  """ % vars(Inliner), re.VERBOSE),)
+
+    def footnote(self, match):
+        lineno = self.state_machine.abs_line_number()
+        indented, indent, offset, blank_finish = \
+              self.state_machine.get_first_known_indented(match.end())
+        label = match.group(1)
+        name = normalize_name(label)
+        footnote = nodes.footnote('\n'.join(indented))
+        footnote.line = lineno
+        if name[0] == '#':              # auto-numbered
+            name = name[1:]             # autonumber label
+            footnote['auto'] = 1
+            if name:
+                footnote['name'] = name
+            self.document.note_autofootnote(footnote)
+        elif name == '*':               # auto-symbol
+            name = ''
+            footnote['auto'] = '*'
+            self.document.note_symbol_footnote(footnote)
+        else:                           # manually numbered
+            footnote += nodes.label('', label)
+            footnote['name'] = name
+            self.document.note_footnote(footnote)
+        if name:
+            self.document.note_explicit_target(footnote, footnote)
+        else:
+            self.document.set_id(footnote, footnote)
+        if indented:
+            self.nested_parse(indented, input_offset=offset, node=footnote)
+        return [footnote], blank_finish
+
+    def citation(self, match):
+        lineno = self.state_machine.abs_line_number()
+        indented, indent, offset, blank_finish = \
+              self.state_machine.get_first_known_indented(match.end())
+        label = match.group(1)
+        name = normalize_name(label)
+        citation = nodes.citation('\n'.join(indented))
+        citation.line = lineno
+        citation += nodes.label('', label)
+        citation['name'] = name
+        self.document.note_citation(citation)
+        self.document.note_explicit_target(citation, citation)
+        if indented:
+            self.nested_parse(indented, input_offset=offset, node=citation)
+        return [citation], blank_finish
+
+    def hyperlink_target(self, match):
+        pattern = self.explicit.patterns.target
+        lineno = self.state_machine.abs_line_number()
+        block, indent, offset, blank_finish = \
+              self.state_machine.get_first_known_indented(
+              match.end(), until_blank=1, strip_indent=0)
+        blocktext = match.string[:match.end()] + '\n'.join(block)
+        block = [escape2null(line) for line in block]
+        escaped = block[0]
+        blockindex = 0
+        while 1:
+            targetmatch = pattern.match(escaped)
+            if targetmatch:
+                break
+            blockindex += 1
+            try:
+                escaped += block[blockindex]
+            except IndexError:
+                raise MarkupError('malformed hyperlink target.', lineno)
+        del block[:blockindex]
+        block[0] = (block[0] + ' ')[targetmatch.end()-len(escaped)-1:].strip()
+        if block and block[-1].strip()[-1:] == '_': # possible indirect target
+            reference = ' '.join([line.strip() for line in block])
+            refname = self.is_reference(reference)
+            if refname:
+                target = nodes.target(blocktext, '', refname=refname)
+                target.line = lineno
+                self.add_target(targetmatch.group('name'), '', target)
+                self.document.note_indirect_target(target)
+                return [target], blank_finish
+        nodelist = []
+        reference = ''.join([line.strip() for line in block])
+        if reference.find(' ') != -1:
+            warning = self.reporter.warning(
+                  'Hyperlink target contains whitespace. Perhaps a footnote '
+                  'was intended?',
+                  nodes.literal_block(blocktext, blocktext), line=lineno)
+            nodelist.append(warning)
+        else:
+            unescaped = unescape(reference)
+            target = nodes.target(blocktext, '')
+            target.line = lineno
+            self.add_target(targetmatch.group('name'), unescaped, target)
+            nodelist.append(target)
+        return nodelist, blank_finish
+
+    def is_reference(self, reference):
+        match = self.explicit.patterns.reference.match(
+            normalize_name(reference))
+        if not match:
+            return None
+        return unescape(match.group('simple') or match.group('phrase'))
+
+    def add_target(self, targetname, refuri, target):
+        if targetname:
+            name = normalize_name(unescape(targetname))
+            target['name'] = name
+            if refuri:
+                uri = self.inliner.adjust_uri(refuri)
+                if uri:
+                    target['refuri'] = uri
+                    self.document.note_external_target(target)
+                else:
+                    raise ApplicationError('problem with URI: %r' % refuri)
+            else:
+                self.document.note_internal_target(target)
+            self.document.note_explicit_target(target, self.parent)
+        else:                       # anonymous target
+            if refuri:
+                target['refuri'] = refuri
+            target['anonymous'] = 1
+            self.document.note_anonymous_target(target)
+
+    def substitution_def(self, match):
+        pattern = self.explicit.patterns.substitution
+        lineno = self.state_machine.abs_line_number()
+        block, indent, offset, blank_finish = \
+              self.state_machine.get_first_known_indented(match.end(),
+                                                          strip_indent=0)
+        blocktext = (match.string[:match.end()] + '\n'.join(block))
+        block.disconnect()
+        for i in range(len(block)):
+            block[i] = escape2null(block[i])
+        escaped = block[0].rstrip()
+        blockindex = 0
+        while 1:
+            subdefmatch = pattern.match(escaped)
+            if subdefmatch:
+                break
+            blockindex += 1
+            try:
+                escaped = escaped + ' ' + block[blockindex].strip()
+            except IndexError:
+                raise MarkupError('malformed substitution definition.',
+                                  lineno)
+        del block[:blockindex]          # strip out the substitution marker
+        block[0] = (block[0] + ' ')[subdefmatch.end()-len(escaped)-1:].strip()
+        if not block[0]:
+            del block[0]
+            offset += 1
+        while block and not block[-1].strip():
+            block.pop()
+        subname = subdefmatch.group('name')
+        substitution_node = nodes.substitution_definition(blocktext)
+        substitution_node.line = lineno
+        self.document.note_substitution_def(
+            substitution_node, subname, self.parent)
+        if block:
+            block[0] = block[0].strip()
+            new_abs_offset, blank_finish = self.nested_list_parse(
+                  block, input_offset=offset, node=substitution_node,
+                  initial_state='SubstitutionDef', blank_finish=blank_finish)
+            i = 0
+            for node in substitution_node[:]:
+                if not (isinstance(node, nodes.Inline) or
+                        isinstance(node, nodes.Text)):
+                    self.parent += substitution_node[i]
+                    del substitution_node[i]
+                else:
+                    i += 1
+            if len(substitution_node) == 0:
+                msg = self.reporter.warning(
+                      'Substitution definition "%s" empty or invalid.'
+                      % subname,
+                      nodes.literal_block(blocktext, blocktext), line=lineno)
+                return [msg], blank_finish
+            else:
+                return [substitution_node], blank_finish
+        else:
+            msg = self.reporter.warning(
+                  'Substitution definition "%s" missing contents.' % subname,
+                  nodes.literal_block(blocktext, blocktext), line=lineno)
+            return [msg], blank_finish
+
+    def directive(self, match, **option_presets):
+        type_name = match.group(1)
+        directive_function, messages = directives.directive(
+            type_name, self.memo.language, self.document)
+        self.parent += messages
+        if directive_function:
+            return self.parse_directive(
+                directive_function, match, type_name, option_presets)
+        else:
+            return self.unknown_directive(type_name)
+
+    def parse_directive(self, directive_fn, match, type_name, option_presets):
+        """
+        Parse a directive then run its directive function.
+
+        Parameters:
+
+        - `directive_fn`: The function implementing the directive.  Uses
+          function attributes ``arguments``, ``options``, and/or ``content``
+          if present.
+
+        - `match`: A regular expression match object which matched the first
+          line of the directive.
+
+        - `type_name`: The directive name, as used in the source text.
+
+        - `option_presets`: A dictionary of preset options, defaults for the
+          directive options.  Currently, only an "alt" option is passed by
+          substitution definitions (value: the substitution name), which may
+          be used by an embedded image directive.
+
+        Returns a 2-tuple: list of nodes, and a "blank finish" boolean.
+        """
+        arguments = []
+        options = {}
+        argument_spec = getattr(directive_fn, 'arguments', None)
+        if argument_spec and argument_spec[:2] == (0, 0):
+            argument_spec = None
+        option_spec = getattr(directive_fn, 'options', None)
+        content_spec = getattr(directive_fn, 'content', None)
+        lineno = self.state_machine.abs_line_number()
+        initial_line_offset = self.state_machine.line_offset
+        indented, indent, line_offset, blank_finish \
+                  = self.state_machine.get_first_known_indented(match.end(),
+                                                                strip_top=0)
+        block_text = '\n'.join(self.state_machine.input_lines[
+            initial_line_offset : self.state_machine.line_offset + 1])
+        if indented and not indented[0].strip():
+            indented.trim_start()
+            line_offset += 1
+        while indented and not indented[-1].strip():
+            indented.trim_end()
+        if indented and (argument_spec or option_spec):
+            for i in range(len(indented)):
+                if not indented[i].strip():
+                    break
+            else:
+                i += 1
+            arg_block = indented[:i]
+            content = indented[i+1:]
+            content_offset = line_offset + i + 1
+        else:
+            content = indented
+            content_offset = line_offset
+            arg_block = []
+        while content and not content[0].strip():
+            content.trim_start()
+            content_offset += 1
+        try:
+            if option_spec:
+                options, arg_block = self.parse_directive_options(
+                    option_presets, option_spec, arg_block)
+            if argument_spec:
+                arguments = self.parse_directive_arguments(argument_spec,
+                                                           arg_block)
+            if content and not content_spec:
+                raise MarkupError('no content permitted')
+        except MarkupError, detail:
+            error = self.reporter.error(
+                'Error in "%s" directive:\n%s.' % (type_name, detail),
+                nodes.literal_block(block_text, block_text), line=lineno)
+            return [error], blank_finish
+        result = directive_fn(
+            type_name, arguments, options, content, lineno, content_offset,
+            block_text, self, self.state_machine)
+        return result, blank_finish or self.state_machine.is_next_line_blank()
+
+    def parse_directive_options(self, option_presets, option_spec, arg_block):
+        options = option_presets.copy()
+        for i in range(len(arg_block)):
+            if arg_block[i][:1] == ':':
+                opt_block = arg_block[i:]
+                arg_block = arg_block[:i]
+                break
+        else:
+            opt_block = []
+        if opt_block:
+            success, data = self.parse_extension_options(option_spec,
+                                                         opt_block)
+            if success:                 # data is a dict of options
+                options.update(data)
+            else:                       # data is an error string
+                raise MarkupError(data)
+        return options, arg_block
+
+    def parse_directive_arguments(self, argument_spec, arg_block):
+        required, optional, last_whitespace = argument_spec
+        arg_text = '\n'.join(arg_block)
+        arguments = arg_text.split()
+        if len(arguments) < required:
+            raise MarkupError('%s argument(s) required, %s supplied'
+                              % (required, len(arguments)))
+        elif len(arguments) > required + optional:
+            if last_whitespace:
+                arguments = arg_text.split(None, required + optional - 1)
+            else:
+                raise MarkupError(
+                    'maximum %s argument(s) allowed, %s supplied'
+                    % (required + optional, len(arguments)))
+        return arguments
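+
+    # Rough illustration of the argument handling above, with hypothetical
+    # specs (required, optional, last_argument_whitespace_ok):
+    #   spec (1, 0, 1), block ['A multi word title'] -> ['A multi word title']
+    #   spec (1, 1, 0), block ['one two']            -> ['one', 'two']
+    #   spec (1, 0, 0), block ['one two']            -> MarkupError
+    #       ('maximum 1 argument(s) allowed, 2 supplied')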
+
+    def parse_extension_options(self, option_spec, datalines):
+        """
+        Parse `datalines` for a field list containing extension options
+        matching `option_spec`.
+
+        :Parameters:
+            - `option_spec`: a mapping of option name to conversion
+              function, which should raise an exception on bad input.
+            - `datalines`: a list of input strings.
+
+        :Return:
+            - Success value, 1 or 0.
+            - An option dictionary on success, an error string on failure.
+        """
+        node = nodes.field_list()
+        newline_offset, blank_finish = self.nested_list_parse(
+              datalines, 0, node, initial_state='ExtensionOptions',
+              blank_finish=1)
+        if newline_offset != len(datalines): # incomplete parse of block
+            return 0, 'invalid option block'
+        try:
+            options = utils.extract_extension_options(node, option_spec)
+        except KeyError, detail:
+            return 0, ('unknown option: "%s"' % detail.args[0])
+        except (ValueError, TypeError), detail:
+            return 0, ('invalid option value: %s' % detail)
+        except utils.ExtensionOptionError, detail:
+            return 0, ('invalid option data: %s' % detail)
+        if blank_finish:
+            return 1, options
+        else:
+            return 0, 'option data incompletely parsed'
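+
+    # Rough illustration, assuming a hypothetical option_spec {'scale': int}
+    # and that utils.extract_extension_options applies the conversion
+    # function to each field body:
+    #   datalines [':scale: 50']    -> (1, {'scale': 50})
+    #   datalines [':scale: huge']  -> (0, 'invalid option value: ...')
+    #   datalines [':width: 10']    -> (0, 'unknown option: "width"')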
+
+    def unknown_directive(self, type_name):
+        lineno = self.state_machine.abs_line_number()
+        indented, indent, offset, blank_finish = \
+              self.state_machine.get_first_known_indented(0, strip_indent=0)
+        text = '\n'.join(indented)
+        error = self.reporter.error(
+              'Unknown directive type "%s".' % type_name,
+              nodes.literal_block(text, text), line=lineno)
+        return [error], blank_finish
+
+    def comment(self, match):
+        if not match.string[match.end():].strip() \
+              and self.state_machine.is_next_line_blank(): # an empty comment?
+            return [nodes.comment()], 1 # "A tiny but practical wart."
+        indented, indent, offset, blank_finish = \
+              self.state_machine.get_first_known_indented(match.end())
+        while indented and not indented[-1].strip():
+            indented.trim_end()
+        text = '\n'.join(indented)
+        return [nodes.comment(text, text)], blank_finish
+
+    explicit.constructs = [
+          (footnote,
+           re.compile(r"""
+                      \.\.[ ]+          # explicit markup start
+                      \[
+                      (                 # footnote label:
+                          [0-9]+          # manually numbered footnote
+                        |               # *OR*
+                          \#              # anonymous auto-numbered footnote
+                        |               # *OR*
+                          \#%s            # labeled auto-numbered footnote
+                        |               # *OR*
+                          \*              # auto-symbol footnote
+                      )
+                      \]
+                      ([ ]+|$)          # whitespace or end of line
+                      """ % Inliner.simplename, re.VERBOSE | re.UNICODE)),
+          (citation,
+           re.compile(r"""
+                      \.\.[ ]+          # explicit markup start
+                      \[(%s)\]          # citation label
+                      ([ ]+|$)          # whitespace or end of line
+                      """ % Inliner.simplename, re.VERBOSE | re.UNICODE)),
+          (hyperlink_target,
+           re.compile(r"""
+                      \.\.[ ]+          # explicit markup start
+                      _                 # target indicator
+                      (?![ ])           # first char. not space
+                      """, re.VERBOSE)),
+          (substitution_def,
+           re.compile(r"""
+                      \.\.[ ]+          # explicit markup start
+                      \|                # substitution indicator
+                      (?![ ])           # first char. not space
+                      """, re.VERBOSE)),
+          (directive,
+           re.compile(r"""
+                      \.\.[ ]+          # explicit markup start
+                      (%s)              # directive name
+                      [ ]?              # optional space
+                      ::                # directive delimiter
+                      ([ ]+|$)          # whitespace or end of line
+                      """ % Inliner.simplename, re.VERBOSE | re.UNICODE))]
+
+    def explicit_markup(self, match, context, next_state):
+        """Footnotes, hyperlink targets, directives, comments."""
+        nodelist, blank_finish = self.explicit_construct(match)
+        self.parent += nodelist
+        self.explicit_list(blank_finish)
+        return [], next_state, []
+
+    def explicit_construct(self, match):
+        """Determine which explicit construct this is, parse & return it."""
+        errors = []
+        for method, pattern in self.explicit.constructs:
+            expmatch = pattern.match(match.string)
+            if expmatch:
+                try:
+                    return method(self, expmatch)
+                except MarkupError, (message, lineno): # never reached?
+                    errors.append(self.reporter.warning(message, line=lineno))
+                    break
+        nodelist, blank_finish = self.comment(match)
+        return nodelist + errors, blank_finish
+
+    def explicit_list(self, blank_finish):
+        """
+        Create a nested state machine for a series of explicit markup
+        constructs (including anonymous hyperlink targets).
+        """
+        offset = self.state_machine.line_offset + 1   # next line
+        newline_offset, blank_finish = self.nested_list_parse(
+              self.state_machine.input_lines[offset:],
+              input_offset=self.state_machine.abs_line_offset() + 1,
+              node=self.parent, initial_state='Explicit',
+              blank_finish=blank_finish,
+              match_titles=self.state_machine.match_titles)
+        self.goto_line(newline_offset)
+        if not blank_finish:
+            self.parent += self.unindent_warning('Explicit markup')
+
+    def anonymous(self, match, context, next_state):
+        """Anonymous hyperlink targets."""
+        nodelist, blank_finish = self.anonymous_target(match)
+        self.parent += nodelist
+        self.explicit_list(blank_finish)
+        return [], next_state, []
+
+    def anonymous_target(self, match):
+        block, indent, offset, blank_finish \
+              = self.state_machine.get_first_known_indented(match.end(),
+                                                            until_blank=1)
+        blocktext = match.string[:match.end()] + '\n'.join(block)
+        if block and block[-1].strip()[-1:] == '_': # possible indirect target
+            reference = escape2null(' '.join([line.strip()
+                                              for line in block]))
+            refname = self.is_reference(reference)
+            if refname:
+                target = nodes.target(blocktext, '', refname=refname,
+                                      anonymous=1)
+                self.document.note_anonymous_target(target)
+                self.document.note_indirect_target(target)
+                return [target], blank_finish
+        nodelist = []
+        reference = escape2null(''.join([line.strip() for line in block]))
+        if reference.find(' ') != -1:
+            lineno = self.state_machine.abs_line_number() - len(block) + 1
+            warning = self.reporter.warning(
+                  'Anonymous hyperlink target contains whitespace. Perhaps a '
+                  'footnote was intended?',
+                  nodes.literal_block(blocktext, blocktext),
+                  line=lineno)
+            nodelist.append(warning)
+        else:
+            target = nodes.target(blocktext, '', anonymous=1)
+            if reference:
+                unescaped = unescape(reference)
+                target['refuri'] = unescaped
+            self.document.note_anonymous_target(target)
+            nodelist.append(target)
+        return nodelist, blank_finish
+
+    def line(self, match, context, next_state):
+        """Section title overline or transition marker."""
+        if self.state_machine.match_titles:
+            return [match.string], 'Line', []
+        elif match.string.strip() == '::':
+            raise statemachine.TransitionCorrection('text')
+        elif len(match.string.strip()) < 4:
+            msg = self.reporter.info(
+                'Unexpected possible title overline or transition.\n'
+                "Treating it as ordinary text because it's so short.",
+                line=self.state_machine.abs_line_number())
+            self.parent += msg
+            raise statemachine.TransitionCorrection('text')
+        else:
+            blocktext = self.state_machine.line
+            msg = self.reporter.severe(
+                  'Unexpected section title or transition.',
+                  nodes.literal_block(blocktext, blocktext),
+                  line=self.state_machine.abs_line_number())
+            self.parent += msg
+            return [], next_state, []
+
+    def text(self, match, context, next_state):
+        """Titles, definition lists, paragraphs."""
+        return [match.string], 'Text', []
+
+
+class RFC2822Body(Body):
+
+    """
+    RFC2822 headers are only valid as the first constructs in documents.  As
+    soon as anything else appears, the `Body` state should take over.
+    """
+
+    patterns = Body.patterns.copy()     # can't modify the original
+    patterns['rfc2822'] = r'[!-9;-~]+:( +|$)'
+    initial_transitions = [(name, 'Body')
+                           for name in Body.initial_transitions]
+    initial_transitions.insert(-1, ('rfc2822', 'Body')) # just before 'text'
+
+    def rfc2822(self, match, context, next_state):
+        """RFC2822-style field list item."""
+        fieldlist = nodes.field_list(CLASS='rfc2822')
+        self.parent += fieldlist
+        field, blank_finish = self.rfc2822_field(match)
+        fieldlist += field
+        offset = self.state_machine.line_offset + 1   # next line
+        newline_offset, blank_finish = self.nested_list_parse(
+              self.state_machine.input_lines[offset:],
+              input_offset=self.state_machine.abs_line_offset() + 1,
+              node=fieldlist, initial_state='RFC2822List',
+              blank_finish=blank_finish)
+        self.goto_line(newline_offset)
+        if not blank_finish:
+            self.parent += self.unindent_warning(
+                  'RFC2822-style field list')
+        return [], next_state, []
+
+    def rfc2822_field(self, match):
+        name = match.string[:match.string.find(':')]
+        indented, indent, line_offset, blank_finish = \
+              self.state_machine.get_first_known_indented(match.end(),
+                                                          until_blank=1)
+        fieldnode = nodes.field()
+        fieldnode += nodes.field_name(name, name)
+        fieldbody = nodes.field_body('\n'.join(indented))
+        fieldnode += fieldbody
+        if indented:
+            self.nested_parse(indented, input_offset=line_offset,
+                              node=fieldbody)
+        return fieldnode, blank_finish
+
+
+class SpecializedBody(Body):
+
+    """
+    Superclass for second and subsequent compound element members.  Compound
+    elements are lists and list-like constructs.
+
+    All transition methods are disabled (redefined as `invalid_input`).
+    Override individual methods in subclasses to re-enable.
+
+    For example, once an initial bullet list item, say, is recognized, the
+    `BulletList` subclass takes over, with a "bullet_list" node as its
+    container.  Upon encountering the initial bullet list item, `Body.bullet`
+    calls its ``self.nested_list_parse`` (`RSTState.nested_list_parse`), which
+    starts up a nested parsing session with `BulletList` as the initial state.
+    Only the ``bullet`` transition method is enabled in `BulletList`; as long
+    as only bullet list items are encountered, they are parsed and inserted
+    into the container.  The first construct which is *not* a bullet list item
+    triggers the `invalid_input` method, which ends the nested parse and
+    closes the container.  `BulletList` needs to recognize input that is
+    invalid in the context of a bullet list, which means everything *other
+    than* bullet list items, so it inherits the transition list created in
+    `Body`.
+    """
+
+    def invalid_input(self, match=None, context=None, next_state=None):
+        """Not a compound element member. Abort this state machine."""
+        self.state_machine.previous_line() # back up so parent SM can reassess
+        raise EOFError
+
+    indent = invalid_input
+    bullet = invalid_input
+    enumerator = invalid_input
+    field_marker = invalid_input
+    option_marker = invalid_input
+    doctest = invalid_input
+    grid_table_top = invalid_input
+    simple_table_top = invalid_input
+    explicit_markup = invalid_input
+    anonymous = invalid_input
+    line = invalid_input
+    text = invalid_input
+
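
The pattern described in the docstring above is easiest to see from the driving
side.  As a rough sketch (not part of this commit; it simply mirrors the
`nested_list_parse` calls that appear elsewhere in this file), a `Body`
transition that opens a compound element hands control to a `SpecializedBody`
subclass like this::

    def bullet_like(self, match, context, next_state):
        # Hypothetical Body transition: open the container, add the first item.
        container = nodes.bullet_list()
        container['bullet'] = match.string[0]
        self.parent += container
        item, blank_finish = self.list_item(match.end())
        container += item
        # Nested parse: only the re-enabled transition of the specialized
        # state (here 'BulletList') accepts input; anything else triggers
        # invalid_input(), which raises EOFError and returns control here.
        offset = self.state_machine.line_offset + 1   # next line
        newline_offset, blank_finish = self.nested_list_parse(
              self.state_machine.input_lines[offset:],
              input_offset=self.state_machine.abs_line_offset() + 1,
              node=container, initial_state='BulletList',
              blank_finish=blank_finish)
        self.goto_line(newline_offset)
        return [], next_state, []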
+
+class BulletList(SpecializedBody):
+
+    """Second and subsequent bullet_list list_items."""
+
+    def bullet(self, match, context, next_state):
+        """Bullet list item."""
+        if match.string[0] != self.parent['bullet']:
+            # different bullet: new list
+            self.invalid_input()
+        listitem, blank_finish = self.list_item(match.end())
+        self.parent += listitem
+        self.blank_finish = blank_finish
+        return [], next_state, []
+
+
+class DefinitionList(SpecializedBody):
+
+    """Second and subsequent definition_list_items."""
+
+    def text(self, match, context, next_state):
+        """Definition lists."""
+        return [match.string], 'Definition', []
+
+
+class EnumeratedList(SpecializedBody):
+
+    """Second and subsequent enumerated_list list_items."""
+
+    def enumerator(self, match, context, next_state):
+        """Enumerated list item."""
+        format, sequence, text, ordinal = self.parse_enumerator(
+              match, self.parent['enumtype'])
+        if (sequence != self.parent['enumtype'] or
+            format != self.format or
+            ordinal != (self.lastordinal + 1) or
+            not self.is_enumerated_list_item(ordinal, sequence, format)):
+            # different enumeration: new list
+            self.invalid_input()
+        listitem, blank_finish = self.list_item(match.end())
+        self.parent += listitem
+        self.blank_finish = blank_finish
+        self.lastordinal = ordinal
+        return [], next_state, []
+
+
+class FieldList(SpecializedBody):
+
+    """Second and subsequent field_list fields."""
+
+    def field_marker(self, match, context, next_state):
+        """Field list field."""
+        field, blank_finish = self.field(match)
+        self.parent += field
+        self.blank_finish = blank_finish
+        return [], next_state, []
+
+
+class OptionList(SpecializedBody):
+
+    """Second and subsequent option_list option_list_items."""
+
+    def option_marker(self, match, context, next_state):
+        """Option list item."""
+        try:
+            option_list_item, blank_finish = self.option_list_item(match)
+        except MarkupError, (message, lineno):
+            self.invalid_input()
+        self.parent += option_list_item
+        self.blank_finish = blank_finish
+        return [], next_state, []
+
+
+class RFC2822List(SpecializedBody, RFC2822Body):
+
+    """Second and subsequent RFC2822-style field_list fields."""
+
+    patterns = RFC2822Body.patterns
+    initial_transitions = RFC2822Body.initial_transitions
+
+    def rfc2822(self, match, context, next_state):
+        """RFC2822-style field list item."""
+        field, blank_finish = self.rfc2822_field(match)
+        self.parent += field
+        self.blank_finish = blank_finish
+        return [], 'RFC2822List', []
+
+    blank = SpecializedBody.invalid_input
+
+
+class ExtensionOptions(FieldList):
+
+    """
+    Parse field_list fields for extension options.
+
+    No nested parsing is done (including inline markup parsing).
+    """
+
+    def parse_field_body(self, indented, offset, node):
+        """Override `Body.parse_field_body` for simpler parsing."""
+        lines = []
+        for line in list(indented) + ['']:
+            if line.strip():
+                lines.append(line)
+            elif lines:
+                text = '\n'.join(lines)
+                node += nodes.paragraph(text, text)
+                lines = []
+
+
+class Explicit(SpecializedBody):
+
+    """Second and subsequent explicit markup construct."""
+
+    def explicit_markup(self, match, context, next_state):
+        """Footnotes, hyperlink targets, directives, comments."""
+        nodelist, blank_finish = self.explicit_construct(match)
+        self.parent += nodelist
+        self.blank_finish = blank_finish
+        return [], next_state, []
+
+    def anonymous(self, match, context, next_state):
+        """Anonymous hyperlink targets."""
+        nodelist, blank_finish = self.anonymous_target(match)
+        self.parent += nodelist
+        self.blank_finish = blank_finish
+        return [], next_state, []
+
+    blank = SpecializedBody.invalid_input
+
+
+class SubstitutionDef(Body):
+
+    """
+    Parser for the contents of a substitution_definition element.
+    """
+
+    patterns = {
+          'embedded_directive': re.compile(r'(%s)::( +|$)'
+                                           % Inliner.simplename, re.UNICODE),
+          'text': r''}
+    initial_transitions = ['embedded_directive', 'text']
+
+    def embedded_directive(self, match, context, next_state):
+        nodelist, blank_finish = self.directive(match,
+                                                alt=self.parent['name'])
+        self.parent += nodelist
+        if not self.state_machine.at_eof():
+            self.blank_finish = blank_finish
+        raise EOFError
+
+    def text(self, match, context, next_state):
+        if not self.state_machine.at_eof():
+            self.blank_finish = self.state_machine.is_next_line_blank()
+        raise EOFError
+
+
+class Text(RSTState):
+
+    """
+    Classifier of second line of a text block.
+
+    Could be a paragraph, a definition list item, or a title.
+    """
+
+    patterns = {'underline': Body.patterns['line'],
+                'text': r''}
+    initial_transitions = [('underline', 'Body'), ('text', 'Body')]
+
+    def blank(self, match, context, next_state):
+        """End of paragraph."""
+        paragraph, literalnext = self.paragraph(
+              context, self.state_machine.abs_line_number() - 1)
+        self.parent += paragraph
+        if literalnext:
+            self.parent += self.literal_block()
+        return [], 'Body', []
+
+    def eof(self, context):
+        if context:
+            self.blank(None, context, None)
+        return []
+
+    def indent(self, match, context, next_state):
+        """Definition list item."""
+        definitionlist = nodes.definition_list()
+        definitionlistitem, blank_finish = self.definition_list_item(context)
+        definitionlist += definitionlistitem
+        self.parent += definitionlist
+        offset = self.state_machine.line_offset + 1   # next line
+        newline_offset, blank_finish = self.nested_list_parse(
+              self.state_machine.input_lines[offset:],
+              input_offset=self.state_machine.abs_line_offset() + 1,
+              node=definitionlist, initial_state='DefinitionList',
+              blank_finish=blank_finish, blank_finish_state='Definition')
+        self.goto_line(newline_offset)
+        if not blank_finish:
+            self.parent += self.unindent_warning('Definition list')
+        return [], 'Body', []
+
+    def underline(self, match, context, next_state):
+        """Section title."""
+        lineno = self.state_machine.abs_line_number()
+        title = context[0].rstrip()
+        underline = match.string.rstrip()
+        source = title + '\n' + underline
+        messages = []
+        if len(title) > len(underline):
+            if len(underline) < 4:
+                if self.state_machine.match_titles:
+                    msg = self.reporter.info(
+                        'Possible title underline, too short for the title.\n'
+                        "Treating it as ordinary text because it's so short.",
+                        line=lineno)
+                    self.parent += msg
+                raise statemachine.TransitionCorrection('text')
+            else:
+                blocktext = context[0] + '\n' + self.state_machine.line
+                msg = self.reporter.warning(
+                    'Title underline too short.',
+                    nodes.literal_block(blocktext, blocktext), line=lineno)
+                messages.append(msg)
+        if not self.state_machine.match_titles:
+            blocktext = context[0] + '\n' + self.state_machine.line
+            msg = self.reporter.severe(
+                'Unexpected section title.',
+                nodes.literal_block(blocktext, blocktext), line=lineno)
+            self.parent += messages
+            self.parent += msg
+            return [], next_state, []
+        style = underline[0]
+        context[:] = []
+        self.section(title, source, style, lineno - 1, messages)
+        return [], next_state, []
+
+    def text(self, match, context, next_state):
+        """Paragraph."""
+        startline = self.state_machine.abs_line_number() - 1
+        msg = None
+        try:
+            block = self.state_machine.get_text_block(flush_left=1)
+        except statemachine.UnexpectedIndentationError, instance:
+            block, source, lineno = instance.args
+            msg = self.reporter.error('Unexpected indentation.',
+                                      source=source, line=lineno)
+        lines = context + list(block)
+        paragraph, literalnext = self.paragraph(lines, startline)
+        self.parent += paragraph
+        self.parent += msg
+        if literalnext:
+            try:
+                self.state_machine.next_line()
+            except EOFError:
+                pass
+            self.parent += self.literal_block()
+        return [], next_state, []
+
+    def literal_block(self):
+        """Return a list of nodes."""
+        indented, indent, offset, blank_finish = \
+              self.state_machine.get_indented()
+        nodelist = []
+        while indented and not indented[-1].strip():
+            indented.trim_end()
+        if indented:
+            data = '\n'.join(indented)
+            nodelist.append(nodes.literal_block(data, data))
+            if not blank_finish:
+                nodelist.append(self.unindent_warning('Literal block'))
+        else:
+            nodelist.append(self.reporter.warning(
+                  'Literal block expected; none found.',
+                  line=self.state_machine.abs_line_number()))
+        return nodelist
+
+    def definition_list_item(self, termline):
+        indented, indent, line_offset, blank_finish = \
+              self.state_machine.get_indented()
+        definitionlistitem = nodes.definition_list_item(
+            '\n'.join(termline + list(indented)))
+        lineno = self.state_machine.abs_line_number() - 1
+        definitionlistitem.line = lineno
+        termlist, messages = self.term(termline, lineno)
+        definitionlistitem += termlist
+        definition = nodes.definition('', *messages)
+        definitionlistitem += definition
+        if termline[0][-2:] == '::':
+            definition += self.reporter.info(
+                  'Blank line missing before literal block? Interpreted as a '
+                  'definition list item.', line=line_offset + 1)
+        self.nested_parse(indented, input_offset=line_offset, node=definition)
+        return definitionlistitem, blank_finish
+
+    def term(self, lines, lineno):
+        """Return a definition_list's term and optional classifier."""
+        assert len(lines) == 1
+        text_nodes, messages = self.inline_text(lines[0], lineno)
+        term_node = nodes.term()
+        node_list = [term_node]
+        for i in range(len(text_nodes)):
+            node = text_nodes[i]
+            if isinstance(node, nodes.Text):
+                parts = node.rawsource.split(' : ', 1)
+                if len(parts) == 1:
+                    term_node += node
+                else:
+                    term_node += nodes.Text(parts[0].rstrip())
+                    classifier_node = nodes.classifier('', parts[1])
+                    classifier_node += text_nodes[i+1:]
+                    node_list.append(classifier_node)
+                    break
+            else:
+                term_node += node
+        return node_list, messages
+
+
+class SpecializedText(Text):
+
+    """
+    Superclass for second and subsequent lines of Text-variants.
+
+    All transition methods are disabled. Override individual methods in
+    subclasses to re-enable.
+    """
+
+    def eof(self, context):
+        """Incomplete construct."""
+        return []
+
+    def invalid_input(self, match=None, context=None, next_state=None):
+        """Not a compound element member. Abort this state machine."""
+        raise EOFError
+
+    blank = invalid_input
+    indent = invalid_input
+    underline = invalid_input
+    text = invalid_input
+
+
+class Definition(SpecializedText):
+
+    """Second line of potential definition_list_item."""
+
+    def eof(self, context):
+        """Not a definition."""
+        self.state_machine.previous_line(2) # so parent SM can reassess
+        return []
+
+    def indent(self, match, context, next_state):
+        """Definition list item."""
+        definitionlistitem, blank_finish = self.definition_list_item(context)
+        self.parent += definitionlistitem
+        self.blank_finish = blank_finish
+        return [], 'DefinitionList', []
+
+
+class Line(SpecializedText):
+
+    """
+    Second line of over- & underlined section title or transition marker.
+    """
+
+    eofcheck = 1                        # @@@ ???
+    """Set to 0 while parsing sections, so that we don't catch the EOF."""
+
+    def eof(self, context):
+        """Transition marker at end of section or document."""
+        marker = context[0].strip()
+        if self.memo.section_bubble_up_kludge:
+            self.memo.section_bubble_up_kludge = 0
+        elif len(marker) < 4:
+            self.state_correction(context)
+        if self.eofcheck:               # ignore EOFError with sections
+            lineno = self.state_machine.abs_line_number() - 1
+            transition = nodes.transition(context[0])
+            transition.line = lineno
+            self.parent += transition
+            msg = self.reporter.error(
+                  'Document or section may not end with a transition.',
+                  line=lineno)
+            self.parent += msg
+        self.eofcheck = 1
+        return []
+
+    def blank(self, match, context, next_state):
+        """Transition marker."""
+        lineno = self.state_machine.abs_line_number() - 1
+        marker = context[0].strip()
+        if len(marker) < 4:
+            self.state_correction(context)
+        transition = nodes.transition(marker)
+        transition.line = lineno
+        if len(self.parent) == 0:
+            msg = self.reporter.error(
+                  'Document or section may not begin with a transition.',
+                  line=lineno)
+            self.parent += msg
+        elif isinstance(self.parent[-1], nodes.transition):
+            msg = self.reporter.error(
+                  'At least one body element must separate transitions; '
+                  'adjacent transitions not allowed.',
+                  line=lineno)
+            self.parent += msg
+        self.parent += transition
+        return [], 'Body', []
+
+    def text(self, match, context, next_state):
+        """Potential over- & underlined title."""
+        lineno = self.state_machine.abs_line_number() - 1
+        overline = context[0]
+        title = match.string
+        underline = ''
+        try:
+            underline = self.state_machine.next_line()
+        except EOFError:
+            blocktext = overline + '\n' + title
+            if len(overline.rstrip()) < 4:
+                self.short_overline(context, blocktext, lineno, 2)
+            else:
+                msg = self.reporter.severe(
+                    'Incomplete section title.',
+                    nodes.literal_block(blocktext, blocktext), line=lineno)
+                self.parent += msg
+                return [], 'Body', []
+        source = '%s\n%s\n%s' % (overline, title, underline)
+        overline = overline.rstrip()
+        underline = underline.rstrip()
+        if not self.transitions['underline'][0].match(underline):
+            blocktext = overline + '\n' + title + '\n' + underline
+            if len(overline.rstrip()) < 4:
+                self.short_overline(context, blocktext, lineno, 2)
+            else:
+                msg = self.reporter.severe(
+                    'Missing underline for overline.',
+                    nodes.literal_block(source, source), line=lineno)
+                self.parent += msg
+                return [], 'Body', []
+        elif overline != underline:
+            blocktext = overline + '\n' + title + '\n' + underline
+            if len(overline.rstrip()) < 4:
+                self.short_overline(context, blocktext, lineno, 2)
+            else:
+                msg = self.reporter.severe(
+                      'Title overline & underline mismatch.',
+                      nodes.literal_block(source, source), line=lineno)
+                self.parent += msg
+                return [], 'Body', []
+        title = title.rstrip()
+        messages = []
+        if len(title) > len(overline):
+            blocktext = overline + '\n' + title + '\n' + underline
+            if len(overline.rstrip()) < 4:
+                self.short_overline(context, blocktext, lineno, 2)
+            else:
+                msg = self.reporter.warning(
+                      'Title overline too short.',
+                      nodes.literal_block(source, source), line=lineno)
+                messages.append(msg)
+        style = (overline[0], underline[0])
+        self.eofcheck = 0               # @@@ not sure this is correct
+        self.section(title.lstrip(), source, style, lineno + 1, messages)
+        self.eofcheck = 1
+        return [], 'Body', []
+
+    indent = text                       # indented title
+
+    def underline(self, match, context, next_state):
+        overline = context[0]
+        blocktext = overline + '\n' + self.state_machine.line
+        lineno = self.state_machine.abs_line_number() - 1
+        if len(overline.rstrip()) < 4:
+            self.short_overline(context, blocktext, lineno, 1)
+        msg = self.reporter.error(
+              'Invalid section title or transition marker.',
+              nodes.literal_block(blocktext, blocktext), line=lineno)
+        self.parent += msg
+        return [], 'Body', []
+
+    def short_overline(self, context, blocktext, lineno, lines=1):
+        msg = self.reporter.info(
+            'Possible incomplete section title.\nTreating the overline as '
+            "ordinary text because it's so short.", line=lineno)
+        self.parent += msg
+        self.state_correction(context, lines)
+
+    def state_correction(self, context, lines=1):
+        self.state_machine.previous_line(lines)
+        context[:] = []
+        raise statemachine.StateCorrection('Body', 'text')
+
+
+state_classes = (Body, BulletList, DefinitionList, EnumeratedList, FieldList,
+                 OptionList, ExtensionOptions, Explicit, Text, Definition,
+                 Line, SubstitutionDef, RFC2822Body, RFC2822List)
+"""Standard set of State classes used to start `RSTStateMachine`."""
+
+
+def escape2null(text):
+    """Return a string with escape-backslashes converted to nulls."""
+    parts = []
+    start = 0
+    while 1:
+        found = text.find('\\', start)
+        if found == -1:
+            parts.append(text[start:])
+            return ''.join(parts)
+        parts.append(text[start:found])
+        parts.append('\x00' + text[found+1:found+2])
+        start = found + 2               # skip character after escape
+
+def unescape(text, restore_backslashes=0):
+    """
+    Return a string with nulls removed or restored to backslashes.
+    Backslash-escaped spaces are also removed.
+    """
+    if restore_backslashes:
+        return text.replace('\x00', '\\')
+    else:
+        for sep in ['\x00 ', '\x00\n', '\x00']:
+            text = ''.join(text.split(sep))
+        return text
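
The two helpers cooperate: `escape2null` turns backslash escapes into null
markers so later processing can tell escaped characters apart, and `unescape`
removes (or restores) the markers.  For example, in an interactive session::

    >>> escape2null(r'a \* literal asterisk')
    'a \x00* literal asterisk'
    >>> unescape('a \x00* literal asterisk')
    'a * literal asterisk'
    >>> unescape('a \x00* literal asterisk', restore_backslashes=1)
    'a \\* literal asterisk'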

Added: trunk/www/utils/helpers/docutils/docutils/parsers/rst/tableparser.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/parsers/rst/tableparser.py        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/parsers/rst/tableparser.py        2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,530 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.7 $
+# Date: $Date: 2002/11/08 01:34:19 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This module defines table parser classes, which parse plaintext-graphic tables
+and produce a well-formed data structure suitable for building a CALS table.
+
+:Classes:
+    - `GridTableParser`: Parse fully-formed tables represented with a grid.
+    - `SimpleTableParser`: Parse simple tables, delimited by top & bottom
+      borders.
+
+:Exception class: `TableMarkupError`
+
+:Function:
+    `update_dict_of_lists()`: Merge two dictionaries containing list values.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import re
+import sys
+from docutils import DataError
+
+
+class TableMarkupError(DataError): pass
+
+
+class TableParser:
+
+    """
+    Abstract superclass for the common parts of the syntax-specific parsers.
+    """
+
+    head_body_separator_pat = None
+    """Matches the row separator between head rows and body rows."""
+
+    def parse(self, block):
+        """
+        Analyze the text `block` and return a table data structure.
+
+        Given a plaintext-graphic table in `block` (list of lines of text; no
+        whitespace padding), parse the table, construct and return the data
+        necessary to construct a CALS table or equivalent.
+
+        Raise `TableMarkupError` if there is any problem with the markup.
+        """
+        self.setup(block)
+        self.find_head_body_sep()
+        self.parse_table()
+        structure = self.structure_from_cells()
+        return structure
+
+    def find_head_body_sep(self):
+        """Look for a head/body row separator line; store the line index."""
+        for i in range(len(self.block)):
+            line = self.block[i]
+            if self.head_body_separator_pat.match(line):
+                if self.head_body_sep:
+                    raise TableMarkupError(
+                        'Multiple head/body row separators in table (at line '
+                        'offset %s and %s); only one allowed.'
+                        % (self.head_body_sep, i))
+                else:
+                    self.head_body_sep = i
+                    self.block[i] = line.replace('=', '-')
+        if self.head_body_sep == 0 or self.head_body_sep == (len(self.block)
+                                                             - 1):
+            raise TableMarkupError('The head/body row separator may not be '
+                                   'the first or last line of the table.')
+
+
+class GridTableParser(TableParser):
+
+    """
+    Parse a grid table using `parse()`.
+
+    Here's an example of a grid table::
+
+        +------------------------+------------+----------+----------+
+        | Header row, column 1   | Header 2   | Header 3 | Header 4 |
+        +========================+============+==========+==========+
+        | body row 1, column 1   | column 2   | column 3 | column 4 |
+        +------------------------+------------+----------+----------+
+        | body row 2             | Cells may span columns.          |
+        +------------------------+------------+---------------------+
+        | body row 3             | Cells may  | - Table cells       |
+        +------------------------+ span rows. | - contain           |
+        | body row 4             |            | - body elements.    |
+        +------------------------+------------+---------------------+
+
+    Intersections use '+', row separators use '-' (except for one optional
+    head/body row separator, which uses '='), and column separators use '|'.
+
+    Passing the above table to the `parse()` method will result in the
+    following data structure::
+
+        ([24, 12, 10, 10],
+         [[(0, 0, 1, ['Header row, column 1']),
+           (0, 0, 1, ['Header 2']),
+           (0, 0, 1, ['Header 3']),
+           (0, 0, 1, ['Header 4'])]],
+         [[(0, 0, 3, ['body row 1, column 1']),
+           (0, 0, 3, ['column 2']),
+           (0, 0, 3, ['column 3']),
+           (0, 0, 3, ['column 4'])],
+          [(0, 0, 5, ['body row 2']),
+           (0, 2, 5, ['Cells may span columns.']),
+           None,
+           None],
+          [(0, 0, 7, ['body row 3']),
+           (1, 0, 7, ['Cells may', 'span rows.', '']),
+           (1, 1, 7, ['- Table cells', '- contain', '- body elements.']),
+           None],
+          [(0, 0, 9, ['body row 4']), None, None, None]])
+
+    The first item is a list containing column widths (colspecs). The second
+    item is a list of head rows, and the third is a list of body rows. Each
+    row contains a list of cells. Each cell is either None (for a cell unused
+    because of another cell's span), or a tuple. A cell tuple contains four
+    items: the number of extra rows used by the cell in a vertical span
+    (morerows); the number of extra columns used by the cell in a horizontal
+    span (morecols); the line offset of the first line of the cell contents;
+    and the cell contents, a list of lines of text.
+    """
+
+    head_body_separator_pat = re.compile(r'\+=[=+]+=\+ *$')
+
+    def setup(self, block):
+        self.block = list(block)        # make a copy; it may be modified
+        self.bottom = len(block) - 1
+        self.right = len(block[0]) - 1
+        self.head_body_sep = None
+        self.done = [-1] * len(block[0])
+        self.cells = []
+        self.rowseps = {0: [0]}
+        self.colseps = {0: [0]}
+
+    def parse_table(self):
+        """
+        Start with a queue of upper-left corners, containing the upper-left
+        corner of the table itself. Trace out one rectangular cell, remember
+        it, and add its upper-right and lower-left corners to the queue of
+        potential upper-left corners of further cells. Process the queue in
+        top-to-bottom order, keeping track of how much of each text column has
+        been seen.
+
+        We'll end up knowing all the row and column boundaries, cell positions
+        and their dimensions.
+        """
+        corners = [(0, 0)]
+        while corners:
+            top, left = corners.pop(0)
+            if top == self.bottom or left == self.right \
+                  or top <= self.done[left]:
+                continue
+            result = self.scan_cell(top, left)
+            if not result:
+                continue
+            bottom, right, rowseps, colseps = result
+            update_dict_of_lists(self.rowseps, rowseps)
+            update_dict_of_lists(self.colseps, colseps)
+            self.mark_done(top, left, bottom, right)
+            cellblock = self.get_cell_block(top, left, bottom, right)
+            self.cells.append((top, left, bottom, right, cellblock))
+            corners.extend([(top, right), (bottom, left)])
+            corners.sort()
+        if not self.check_parse_complete():
+            raise TableMarkupError('Malformed table; parse incomplete.')
+
+    def mark_done(self, top, left, bottom, right):
+        """For keeping track of how much of each text column has been seen."""
+        before = top - 1
+        after = bottom - 1
+        for col in range(left, right):
+            assert self.done[col] == before
+            self.done[col] = after
+
+    def check_parse_complete(self):
+        """Each text column should have been completely seen."""
+        last = self.bottom - 1
+        for col in range(self.right):
+            if self.done[col] != last:
+                return None
+        return 1
+
+    def get_cell_block(self, top, left, bottom, right):
+        """Given the corners, extract the text of a cell."""
+        cellblock = []
+        margin = right
+        for lineno in range(top + 1, bottom):
+            line = self.block[lineno][left + 1 : right].rstrip()
+            cellblock.append(line)
+            if line:
+                margin = min(margin, len(line) - len(line.lstrip()))
+        if 0 < margin < right:
+            cellblock = [line[margin:] for line in cellblock]
+        return cellblock
+
+    def scan_cell(self, top, left):
+        """Starting at the top-left corner, start tracing out a cell."""
+        assert self.block[top][left] == '+'
+        result = self.scan_right(top, left)
+        return result
+
+    def scan_right(self, top, left):
+        """
+        Look for the top-right corner of the cell, and make note of all column
+        boundaries ('+').
+        """
+        colseps = {}
+        line = self.block[top]
+        for i in range(left + 1, self.right + 1):
+            if line[i] == '+':
+                colseps[i] = [top]
+                result = self.scan_down(top, left, i)
+                if result:
+                    bottom, rowseps, newcolseps = result
+                    update_dict_of_lists(colseps, newcolseps)
+                    return bottom, i, rowseps, colseps
+            elif line[i] != '-':
+                return None
+        return None
+
+    def scan_down(self, top, left, right):
+        """
+        Look for the bottom-right corner of the cell, making note of all row
+        boundaries.
+        """
+        rowseps = {}
+        for i in range(top + 1, self.bottom + 1):
+            if self.block[i][right] == '+':
+                rowseps[i] = [right]
+                result = self.scan_left(top, left, i, right)
+                if result:
+                    newrowseps, colseps = result
+                    update_dict_of_lists(rowseps, newrowseps)
+                    return i, rowseps, colseps
+            elif self.block[i][right] != '|':
+                return None
+        return None
+
+    def scan_left(self, top, left, bottom, right):
+        """
+        Noting column boundaries, look for the bottom-left corner of the cell.
+        It must line up with the starting point.
+        """
+        colseps = {}
+        line = self.block[bottom]
+        for i in range(right - 1, left, -1):
+            if line[i] == '+':
+                colseps[i] = [bottom]
+            elif line[i] != '-':
+                return None
+        if line[left] != '+':
+            return None
+        result = self.scan_up(top, left, bottom, right)
+        if result is not None:
+            rowseps = result
+            return rowseps, colseps
+        return None
+
+    def scan_up(self, top, left, bottom, right):
+        """
+        Noting row boundaries, see if we can return to the starting point.
+        """
+        rowseps = {}
+        for i in range(bottom - 1, top, -1):
+            if self.block[i][left] == '+':
+                rowseps[i] = [left]
+            elif self.block[i][left] != '|':
+                return None
+        return rowseps
+
+    def structure_from_cells(self):
+        """
+        Convert the data collected by `scan_cell()` into the final data
+        structure.
+        """
+        rowseps = self.rowseps.keys()   # list of row boundaries
+        rowseps.sort()
+        rowindex = {}
+        for i in range(len(rowseps)):
+            rowindex[rowseps[i]] = i    # row boundary -> row number mapping
+        colseps = self.colseps.keys()   # list of column boundaries
+        colseps.sort()
+        colindex = {}
+        for i in range(len(colseps)):
+            colindex[colseps[i]] = i    # column boundary -> col number map
+        colspecs = [(colseps[i] - colseps[i - 1] - 1)
+                    for i in range(1, len(colseps))] # list of column widths
+        # prepare an empty table with the correct number of rows & columns
+        onerow = [None for i in range(len(colseps) - 1)]
+        rows = [onerow[:] for i in range(len(rowseps) - 1)]
+        # keep track of # of cells remaining; should reduce to zero
+        remaining = (len(rowseps) - 1) * (len(colseps) - 1)
+        for top, left, bottom, right, block in self.cells:
+            rownum = rowindex[top]
+            colnum = colindex[left]
+            assert rows[rownum][colnum] is None, (
+                  'Cell (row %s, column %s) already used.'
+                  % (rownum + 1, colnum + 1))
+            morerows = rowindex[bottom] - rownum - 1
+            morecols = colindex[right] - colnum - 1
+            remaining -= (morerows + 1) * (morecols + 1)
+            # write the cell into the table
+            rows[rownum][colnum] = (morerows, morecols, top + 1, block)
+        assert remaining == 0, 'Unused cells remaining.'
+        if self.head_body_sep:          # separate head rows from body rows
+            numheadrows = rowindex[self.head_body_sep]
+            headrows = rows[:numheadrows]
+            bodyrows = rows[numheadrows:]
+        else:
+            headrows = []
+            bodyrows = rows
+        return (colspecs, headrows, bodyrows)
+
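
As a concrete, minimal usage sketch (not part of this commit; it assumes this
docutils snapshot is importable and uses Python 2 syntax, like the rest of the
module), the class above turns a small grid table into the three-part
structure described in its docstring::

    from docutils.parsers.rst.tableparser import GridTableParser

    block = ['+-----+-------+',
             '| A   | B     |',
             '+=====+=======+',
             '| 1   | two   |',
             '+-----+-------+']
    colspecs, headrows, bodyrows = GridTableParser().parse(block)
    print colspecs    # column widths: [5, 7]
    print headrows    # [[(0, 0, 1, ['A']), (0, 0, 1, ['B'])]]
    print bodyrows    # [[(0, 0, 3, ['1']), (0, 0, 3, ['two'])]]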
+
+class SimpleTableParser(TableParser):
+
+    """
+    Parse a simple table using `parse()`.
+
+    Here's an example of a simple table::
+
+        =====  =====
+        col 1  col 2
+        =====  =====
+        1      Second column of row 1.
+        2      Second column of row 2.
+               Second line of paragraph.
+        3      - Second column of row 3.
+
+               - Second item in bullet
+                 list (row 3, column 2).
+        4 is a span
+        ------------
+        5
+        =====  =====
+
+    Top and bottom borders use '=', column span underlines use '-', column
+    separation is indicated with spaces.
+
+    Passing the above table to the `parse()` method will result in the
+    following data structure, whose interpretation is the same as for
+    `GridTableParser`::
+
+        ([5, 25],
+         [[(0, 0, 1, ['col 1']),
+           (0, 0, 1, ['col 2'])]],
+         [[(0, 0, 3, ['1']),
+           (0, 0, 3, ['Second column of row 1.'])],
+          [(0, 0, 4, ['2']),
+           (0, 0, 4, ['Second column of row 2.',
+                      'Second line of paragraph.'])],
+          [(0, 0, 6, ['3']),
+           (0, 0, 6, ['- Second column of row 3.',
+                      '',
+                      '- Second item in bullet',
+                      '  list (row 3, column 2).'])],
+          [(0, 1, 10, ['4 is a span'])],
+          [(0, 0, 12, ['5']),
+           (0, 0, 12, [''])]])
+    """
+
+    head_body_separator_pat = re.compile('=[ =]*$')
+    span_pat = re.compile('-[ -]*$')
+
+    def setup(self, block):
+        self.block = list(block)        # make a copy; it will be modified
+        # Convert top & bottom borders to column span underlines:
+        self.block[0] = self.block[0].replace('=', '-')
+        self.block[-1] = self.block[-1].replace('=', '-')
+        self.head_body_sep = None
+        self.columns = []
+        self.border_end = None
+        self.table = []
+        self.done = [-1] * len(block[0])
+        self.rowseps = {0: [0]}
+        self.colseps = {0: [0]}
+
+    def parse_table(self):
+        """
+        First determine the column boundaries from the top border, then
+        process rows.  Each row may consist of multiple lines; accumulate
+        lines until a row is complete.  Call `self.parse_row` to finish the
+        job.
+        """
+        # Top border must fully describe all table columns.
+        self.columns = self.parse_columns(self.block[0], 0)
+        self.border_end = self.columns[-1][1]
+        firststart, firstend = self.columns[0]
+        block = self.block[1:]
+        offset = 0
+        # Container for accumulating text lines until a row is complete:
+        rowlines = []
+        while block:
+            line = block.pop(0)
+            offset += 1
+            if self.span_pat.match(line):
+                # Column span underline or border; row is complete.
+                self.parse_row(rowlines, (line.rstrip(), offset))
+                rowlines = []
+            elif line[firststart:firstend].strip():
+                # First column not blank, therefore it's a new row.
+                if rowlines:
+                    self.parse_row(rowlines)
+                rowlines = [(line.rstrip(), offset)]
+            else:
+                # Accumulate lines of incomplete row.
+                rowlines.append((line.rstrip(), offset))
+
+    def parse_columns(self, line, offset):
+        """
+        Given a column span underline, return a list of (begin, end) pairs.
+        """
+        cols = []
+        end = 0
+        while 1:
+            begin = line.find('-', end)
+            end = line.find(' ', begin)
+            if begin < 0:
+                break
+            if end < 0:
+                end = len(line)
+            cols.append((begin, end))
+        if self.columns:
+            if cols[-1][1] != self.border_end:
+                raise TableMarkupError('Column span incomplete at line '
+                                       'offset %s.' % offset)
+            # Allow for an unbounded rightmost column:
+            cols[-1] = (cols[-1][0], self.columns[-1][1])
+        return cols
+
+    def init_row(self, colspec, offset):
+        i = 0
+        cells = []
+        for start, end in colspec:
+            morecols = 0
+            try:
+                assert start == self.columns[i][0]
+                while end != self.columns[i][1]:
+                    i += 1
+                    morecols += 1
+            except (AssertionError, IndexError):
+                raise TableMarkupError('Column span alignment problem at '
+                                       'line offset %s.' % offset)
+            cells.append((0, morecols, offset, []))
+            i += 1
+        return cells
+
+    def parse_row(self, lines, spanline=None):
+        """
+        Given the text `lines` of a row, parse it and append to `self.table`.
+
+        The row is parsed according to the current column spec (either
+        `spanline` if provided or `self.columns`).  For each column, extract
+        text from each line, and check for text in column margins.  Finally,
+        adjust for insignificant whitespace.
+        """
+        while lines and not lines[-1][0]:
+            lines.pop()                 # Remove blank trailing lines.
+        if lines:
+            offset = lines[0][1]
+        elif spanline:
+            offset = spanline[1]
+        else:
+            # No new row, just blank lines.
+            return
+        if spanline:
+            columns = self.parse_columns(*spanline)
+        else:
+            columns = self.columns[:]
+        row = self.init_row(columns, offset)
+        # "Infinite" value for a dummy last column's beginning, used to
+        # check for text overflow:
+        columns.append((sys.maxint, None))
+        lastcol = len(columns) - 2
+        for i in range(len(columns) - 1):
+            start, end = columns[i]
+            nextstart = columns[i+1][0]
+            block = []
+            margin = sys.maxint
+            for line, offset in lines:
+                if i == lastcol and line[end:].strip():
+                    text = line[start:].rstrip()
+                    columns[lastcol] = (start, start + len(text))
+                    self.adjust_last_column(start + len(text))
+                elif line[end:nextstart].strip():
+                    raise TableMarkupError('Text in column margin at line '
+                                           'offset %s.' % offset)
+                else:
+                    text = line[start:end].rstrip()
+                block.append(text)
+                if text:
+                    margin = min(margin, len(text) - len(text.lstrip()))
+            if 0 < margin < sys.maxint:
+                block = [line[margin:] for line in block]
+            row[i][3].extend(block)
+        self.table.append(row)
+
+    def adjust_last_column(self, new_end):
+        start, end = self.columns[-1]
+        if new_end > end:
+            self.columns[-1] = (start, new_end)
+
+    def structure_from_cells(self):
+        colspecs = [end - start for start, end in self.columns]
+        first_body_row = 0
+        if self.head_body_sep:
+            for i in range(len(self.table)):
+                if self.table[i][0][2] > self.head_body_sep:
+                    first_body_row = i
+                    break
+        return (colspecs, self.table[:first_body_row],
+                self.table[first_body_row:])
+
+
+def update_dict_of_lists(master, newdata):
+    """
+    Extend the list values of `master` with those from `newdata`.
+
+    Both parameters must be dictionaries containing list values.
+    """
+    for key, values in newdata.items():
+        master.setdefault(key, []).extend(values)
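
A quick illustration of the merge behaviour (list values for shared keys are
concatenated; new keys are added)::

    >>> master = {0: [0], 2: [6]}
    >>> update_dict_of_lists(master, {2: [14], 4: [6, 14]})
    >>> master[2], master[4]
    ([6, 14], [6, 14])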

Added: trunk/www/utils/helpers/docutils/docutils/readers/__init__.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/readers/__init__.py       2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/readers/__init__.py       2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,88 @@
+# Authors: David Goodger; Ueli Schlaepfer
+# Contact: address@hidden
+# Revision: $Revision: 1.13 $
+# Date: $Date: 2002/11/19 02:36:47 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains Docutils Reader modules.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+from docutils import utils, parsers, Component
+from docutils.transforms import universal
+
+
+class Reader(Component):
+
+    """
+    Abstract base class for docutils Readers.
+
+    Each reader module or package must export a subclass also called 'Reader'.
+
+    The three steps of a Reader's responsibility are defined: `scan()`,
+    `parse()`, and `transform()`. Call `read()` to process a document.
+    """
+
+    component_type = 'reader'
+
+    def __init__(self, parser=None, parser_name='restructuredtext'):
+        """
+        Initialize the Reader instance.
+
+        Several instance attributes are defined with dummy initial values.
+        Subclasses may use these attributes as they wish.
+        """
+
+        self.parser = parser
+        """A `parsers.Parser` instance shared by all doctrees.  May be left
+        unspecified if the document source determines the parser."""
+
+        if parser is None and parser_name:
+            self.set_parser(parser_name)
+
+        self.source = None
+        """`docutils.io` IO object, source of input data."""
+
+        self.input = None
+        """Raw text input; either a single string or, for more complex cases,
+        a collection of strings."""
+
+    def set_parser(self, parser_name):
+        """Set `self.parser` by name."""
+        parser_class = parsers.get_parser_class(parser_name)
+        self.parser = parser_class()
+
+    def read(self, source, parser, settings):
+        self.source = source
+        if not self.parser:
+            self.parser = parser
+        self.settings = settings
+        self.input = self.source.read()
+        self.parse()
+        return self.document
+
+    def parse(self):
+        """Parse `self.input` into a document tree."""
+        self.document = document = self.new_document()
+        self.parser.parse(self.input, document)
+        document.current_source = document.current_line = None
+
+    def new_document(self):
+        """Create and return a new empty document tree (root node)."""
+        document = utils.new_document(self.source.source_path, self.settings)
+        return document
+
+
+_reader_aliases = {}
+
+def get_reader_class(reader_name):
+    """Return the Reader class from the `reader_name` module."""
+    reader_name = reader_name.lower()
+    if _reader_aliases.has_key(reader_name):
+        reader_name = _reader_aliases[reader_name]
+    module = __import__(reader_name, globals(), locals())
+    return module.Reader
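
A small usage sketch for the factory above (not part of this commit; the
'standalone' reader module is the default one shipped with docutils)::

    from docutils import readers

    ReaderClass = readers.get_reader_class('standalone')
    reader = ReaderClass()     # builds a reStructuredText parser by default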

Added: trunk/www/utils/helpers/docutils/docutils/readers/pep.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/readers/pep.py    2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/readers/pep.py    2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,58 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.12 $
+# Date: $Date: 2003/06/03 02:17:28 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Python Enhancement Proposal (PEP) Reader.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+from docutils.readers import standalone
+from docutils.transforms import peps, references
+from docutils.parsers import rst
+
+
+class Inliner(rst.states.Inliner):
+
+    """
+    Extend `rst.Inliner` for local PEP references.
+    """
+
+    pep_url = rst.states.Inliner.pep_url_local
+
+
+class Reader(standalone.Reader):
+
+    supported = ('pep',)
+    """Contexts this reader supports."""
+
+    settings_spec = (
+        'PEP Reader Option Defaults',
+        'The --pep-references and --rfc-references options (for the '
+        'reStructuredText parser) are on by default.',
+        ())
+
+    default_transforms = (references.Substitutions,
+                          peps.Headers,
+                          peps.Contents,
+                          references.ChainedTargets,
+                          references.AnonymousHyperlinks,
+                          references.IndirectHyperlinks,
+                          peps.TargetNotes,
+                          references.Footnotes,
+                          references.ExternalTargets,
+                          references.InternalTargets,)
+
+    settings_default_overrides = {'pep_references': 1, 'rfc_references': 1}
+
+    inliner_class = Inliner
+
+    def __init__(self, parser=None, parser_name=None):
+        """`parser` should be ``None``."""
+        if parser is None:
+            parser = rst.Parser(rfc2822=1, inliner=self.inliner_class())
+        standalone.Reader.__init__(self, parser, '')
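
A hypothetical end-to-end sketch (not part of this commit; it assumes this
snapshot's `docutils.core` exposes `publish_string`, as docutils releases of
this era do, and 'pep-0287.txt' is just an example file name)::

    from docutils.core import publish_string

    pep_text = open('pep-0287.txt').read()
    html = publish_string(source=pep_text, reader_name='pep',
                          writer_name='html')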

Added: trunk/www/utils/helpers/docutils/docutils/readers/python/__init__.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/readers/python/__init__.py        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/readers/python/__init__.py        2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,19 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.1 $
+# Date: $Date: 2002/12/05 02:25:35 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains the Python Source Reader modules.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import docutils.readers
+
+
+class Reader(docutils.readers.Reader):
+    pass

Added: trunk/www/utils/helpers/docutils/docutils/readers/python/moduleparser.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/readers/python/moduleparser.py    2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/readers/python/moduleparser.py    2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,784 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.9 $
+# Date: $Date: 2003/01/04 00:18:58 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Parser for Python modules.
+
+The `parse_module()` function takes a module's text and file name, runs it
+through the module parser (using compiler.py and tokenize.py) and produces a
+"module documentation tree": a high-level AST full of nodes that are
+interesting from an auto-documentation standpoint.  For example, given this
+module (x.py)::
+
+    # comment
+
+    '''Docstring'''
+
+    '''Additional docstring'''
+
+    __docformat__ = 'reStructuredText'
+
+    a = 1
+    '''Attribute docstring'''
+
+    class C(Super):
+
+        '''C's docstring'''
+
+        class_attribute = 1
+        '''class_attribute's docstring'''
+
+        def __init__(self, text=None):
+            '''__init__'s docstring'''
+
+            self.instance_attribute = (text * 7
+                                       + ' whaddyaknow')
+            '''instance_attribute's docstring'''
+
+
+    def f(x,                            # parameter x
+          y=a*5,                        # parameter y
+          *args):                       # parameter args
+        '''f's docstring'''
+        return [x + item for item in args]
+
+    f.function_attribute = 1
+    '''f.function_attribute's docstring'''
+
+The module parser will produce this module documentation tree::
+
+    <Module filename="test data">
+        <Comment lineno=1>
+            comment
+        <Docstring>
+            Docstring
+        <Docstring lineno="5">
+            Additional docstring
+        <Attribute lineno="7" name="__docformat__">
+            <Expression lineno="7">
+                'reStructuredText'
+        <Attribute lineno="9" name="a">
+            <Expression lineno="9">
+                1
+            <Docstring lineno="10">
+                Attribute docstring
+        <Class bases="Super" lineno="12" name="C">
+            <Docstring lineno="12">
+                C's docstring
+            <Attribute lineno="16" name="class_attribute">
+                <Expression lineno="16">
+                    1
+                <Docstring lineno="17">
+                    class_attribute's docstring
+            <Method lineno="19" name="__init__">
+                <Docstring lineno="19">
+                    __init__'s docstring
+                <ParameterList lineno="19">
+                    <Parameter lineno="19" name="self">
+                    <Parameter lineno="19" name="text">
+                        <Default lineno="19">
+                            None
+                <Attribute lineno="22" name="self.instance_attribute">
+                    <Expression lineno="22">
+                        (text * 7 + ' whaddyaknow')
+                    <Docstring lineno="24">
+                        instance_attribute's docstring
+        <Function lineno="27" name="f">
+            <Docstring lineno="27">
+                f's docstring
+            <ParameterList lineno="27">
+                <Parameter lineno="27" name="x">
+                    <Comment>
+                        # parameter x
+                <Parameter lineno="27" name="y">
+                    <Default lineno="27">
+                        a * 5
+                    <Comment>
+                        # parameter y
+                <ExcessPositionalArguments lineno="27" name="args">
+                    <Comment>
+                        # parameter args
+        <Attribute lineno="33" name="f.function_attribute">
+            <Expression lineno="33">
+                1
+            <Docstring lineno="34">
+                f.function_attribute's docstring
+
+(Comments are not implemented yet.)
+
+compiler.parse() provides most of what's needed for this doctree, and
+"tokenize" can be used to get the rest.  We can determine the line number from
+the compiler.parse() AST, and the TokenParser.rhs(lineno) method provides the
+rest.
+
+The Docutils Python reader component will transform this module doctree into a
+Python-specific Docutils doctree, and then a `stylist transform`_ will
+further transform it into a generic doctree.  Namespaces will have to be
+compiled for each of the scopes, but I'm not certain at what stage of
+processing.
+
+It's very important to keep all docstring processing out of this, so that it's
+completely generic and not tool-specific.
+
+> Why perform all of those transformations?  Why not go from the AST to a
+> generic doctree?  Or, even from the AST to the final output?
+
+I want the docutils.readers.python.moduleparser.parse_module() function to
+produce a standard documentation-oriented tree that can be used by any tool.
+We can develop it together without having to compromise on the rest of our
+design (i.e., HappyDoc doesn't have to be made to work like Docutils, and
+vice-versa).  It would be a higher-level version of what compiler.py provides.
+
+The Python reader component transforms this generic AST into a Python-specific
+doctree (it knows about modules, classes, functions, etc.), but this is
+specific to Docutils and cannot be used by HappyDoc or others.  The stylist
+transform does the final layout, converting Python-specific structures
+("class" sections, etc.) into a generic doctree using primitives (tables,
+sections, lists, etc.).  This generic doctree does *not* know about Python
+structures any more.  The advantage is that this doctree can be handed off to
+any of the output writers to create any output format we like.
+
+The latter two transforms are separate because I want to be able to have
+multiple independent layout styles (multiple runtime-selectable "stylist
+transforms").  Each of the existing tools (HappyDoc, pydoc, epydoc, Crystal,
+etc.) has its own fixed format.  I personally don't like the tables-based
+format produced by these tools, and I'd like to be able to customize the
+format easily.  That's the goal of stylist transforms, which are independent
+from the Reader component itself.  One stylist transform could produce
+HappyDoc-like output, another could produce output similar to module docs in
+the Python library reference manual, and so on.
+
+It's for exactly this reason:
+
+>> It's very important to keep all docstring processing out of this, so that
+>> it's completely generic and not tool-specific.
+
+... but it goes past docstring processing.  It's also important to keep style
+decisions and tool-specific data transforms out of this module parser.
+
+
+Issues
+======
+
+* At what point should namespaces be computed?  Should they be part of the
+  basic AST produced by the ASTVisitor walk, or generated by another tree
+  traversal?
+
+* At what point should a distinction be made between local variables &
+  instance attributes in __init__ methods?
+
+* Docstrings are getting their lineno from their parents.  Should the
+  TokenParser find the real line numbers?
+
+* Comments: include them?  How and when?  Only full-line comments, or
+  parameter comments too?  (See function "f" above for an example.)
+
+* Module could use more docstrings & refactoring in places.
+
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import compiler
+import compiler.ast
+import tokenize
+import token
+from compiler.consts import OP_ASSIGN
+from compiler.visitor import ASTVisitor
+from types import StringType, UnicodeType, TupleType
+
+
+def parse_module(module_text, filename):
+    """Return a module documentation tree from `module_text`."""
+    ast = compiler.parse(module_text)
+    token_parser = TokenParser(module_text)
+    visitor = ModuleVisitor(filename, token_parser)
+    compiler.walk(ast, visitor, walker=visitor)
+    return visitor.module
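+
+# Example usage (illustrative; 'example.py' is just a placeholder filename):
+#
+#     text = open('example.py').read()
+#     print parse_module(text, 'example.py')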
+
+
+class Node:
+
+    """
+    Base class for module documentation tree nodes.
+    """
+
+    def __init__(self, node):
+        self.children = []
+        """List of child nodes."""
+
+        self.lineno = node.lineno
+        """Line number of this node (or ``None``)."""
+
+    def __str__(self, indent='    ', level=0):
+        return ''.join(['%s%s\n' % (indent * level, repr(self))] +
+                       [child.__str__(indent, level+1)
+                        for child in self.children])
+
+    def __repr__(self):
+        parts = [self.__class__.__name__]
+        for name, value in self.attlist():
+            parts.append('%s="%s"' % (name, value))
+        return '<%s>' % ' '.join(parts)
+
+    def attlist(self, **atts):
+        if self.lineno is not None:
+            atts['lineno'] = self.lineno
+        attlist = atts.items()
+        attlist.sort()
+        return attlist
+
+    def append(self, node):
+        self.children.append(node)
+
+    def extend(self, node_list):
+        self.children.extend(node_list)
+
+
+class TextNode(Node):
+
+    def __init__(self, node, text):
+        Node.__init__(self, node)
+        self.text = trim_docstring(text)
+
+    def __str__(self, indent='    ', level=0):
+        prefix = indent * (level + 1)
+        text = '\n'.join([prefix + line for line in self.text.splitlines()])
+        return Node.__str__(self, indent, level) + text + '\n'
+
+
+class Module(Node):
+
+    def __init__(self, node, filename):
+        Node.__init__(self, node)
+        self.filename = filename
+
+    def attlist(self):
+        return Node.attlist(self, filename=self.filename)
+
+
+class Docstring(TextNode): pass
+
+
+class Comment(TextNode): pass
+
+
+class Import(Node):
+
+    def __init__(self, node, names, from_name=None):
+        Node.__init__(self, node)
+        self.names = names
+        self.from_name = from_name
+
+    def __str__(self, indent='    ', level=0):
+        prefix = indent * (level + 1)
+        lines = []
+        for name, as_name in self.names:
+            if as_name:
+                lines.append('%s%s as %s' % (prefix, name, as_name))
+            else:
+                lines.append('%s%s' % (prefix, name))
+        text = '\n'.join(lines)
+        return Node.__str__(self, indent, level) + text + '\n'
+
+    def attlist(self):
+        if self.from_name:
+            atts = {'from': self.from_name}
+        else:
+            atts = {}
+        return Node.attlist(self, **atts)
+
+
+class Attribute(Node):
+
+    def __init__(self, node, name):
+        Node.__init__(self, node)
+        self.name = name
+
+    def attlist(self):
+        return Node.attlist(self, name=self.name)
+
+
+class AttributeTuple(Node):
+
+    def __init__(self, node, names):
+        Node.__init__(self, node)
+        self.names = names
+
+    def attlist(self):
+        return Node.attlist(self, names=' '.join(self.names))
+
+
+class Expression(TextNode):
+
+    def __str__(self, indent='    ', level=0):
+        prefix = indent * (level + 1)
+        return '%s%s%s\n' % (Node.__str__(self, indent, level),
+                             prefix, self.text.encode('unicode-escape'))
+
+
+class Function(Attribute): pass
+
+
+class ParameterList(Node): pass
+
+
+class Parameter(Attribute): pass
+
+
+class ParameterTuple(AttributeTuple):
+
+    def attlist(self):
+        return Node.attlist(self, names=normalize_parameter_name(self.names))
+
+
+class ExcessPositionalArguments(Parameter): pass
+
+
+class ExcessKeywordArguments(Parameter): pass
+
+
+class Default(Expression): pass
+
+
+class Class(Node):
+
+    def __init__(self, node, name, bases=None):
+        Node.__init__(self, node)
+        self.name = name
+        self.bases = bases or []
+
+    def attlist(self):
+        atts = {'name': self.name}
+        if self.bases:
+            atts['bases'] = ' '.join(self.bases)
+        return Node.attlist(self, **atts)
+
+
+class Method(Function): pass
+
+
+class BaseVisitor(ASTVisitor):
+
+    def __init__(self, token_parser):
+        ASTVisitor.__init__(self)
+        self.token_parser = token_parser
+        self.context = []
+        self.documentable = None
+
+    def default(self, node, *args):
+        self.documentable = None
+        #print 'in default (%s)' % node.__class__.__name__
+        #ASTVisitor.default(self, node, *args)
+
+    def default_visit(self, node, *args):
+        #print 'in default_visit (%s)' % node.__class__.__name__
+        ASTVisitor.default(self, node, *args)
+
+
+class DocstringVisitor(BaseVisitor):
+
+    def visitDiscard(self, node):
+        if self.documentable:
+            self.visit(node.expr)
+
+    def visitConst(self, node):
+        if self.documentable:
+            if type(node.value) in (StringType, UnicodeType):
+                self.documentable.append(Docstring(node, node.value))
+            else:
+                self.documentable = None
+
+    def visitStmt(self, node):
+        self.default_visit(node)
+
+
+class AssignmentVisitor(DocstringVisitor):
+
+    def visitAssign(self, node):
+        visitor = AttributeVisitor(self.token_parser)
+        compiler.walk(node, visitor, walker=visitor)
+        if visitor.attributes:
+            self.context[-1].extend(visitor.attributes)
+        if len(visitor.attributes) == 1:
+            self.documentable = visitor.attributes[0]
+        else:
+            self.documentable = None
+
+
+class ModuleVisitor(AssignmentVisitor):
+
+    def __init__(self, filename, token_parser):
+        AssignmentVisitor.__init__(self, token_parser)
+        self.filename = filename
+        self.module = None
+
+    def visitModule(self, node):
+        self.module = module = Module(node, self.filename)
+        if node.doc is not None:
+            module.append(Docstring(node, node.doc))
+        self.context.append(module)
+        self.documentable = module
+        self.visit(node.node)
+        self.context.pop()
+
+    def visitImport(self, node):
+        self.context[-1].append(Import(node, node.names))
+        self.documentable = None
+
+    def visitFrom(self, node):
+        self.context[-1].append(
+            Import(node, node.names, from_name=node.modname))
+        self.documentable = None
+
+    def visitFunction(self, node):
+        visitor = FunctionVisitor(self.token_parser)
+        compiler.walk(node, visitor, walker=visitor)
+        self.context[-1].append(visitor.function)
+
+    def visitClass(self, node):
+        visitor = ClassVisitor(self.token_parser)
+        compiler.walk(node, visitor, walker=visitor)
+        self.context[-1].append(visitor.klass)
+
+
+class AttributeVisitor(BaseVisitor):
+
+    def __init__(self, token_parser):
+        BaseVisitor.__init__(self, token_parser)
+        self.attributes = []
+
+    def visitAssign(self, node):
+        # Don't visit the expression itself, just the attribute nodes:
+        for child in node.nodes:
+            self.dispatch(child)
+        expression_text = self.token_parser.rhs(node.lineno)
+        expression = Expression(node, expression_text)
+        for attribute in self.attributes:
+            attribute.append(expression)
+
+    def visitAssName(self, node):
+        self.attributes.append(Attribute(node, node.name))
+
+    def visitAssTuple(self, node):
+        attributes = self.attributes
+        self.attributes = []
+        self.default_visit(node)
+        names = [attribute.name for attribute in self.attributes]
+        att_tuple = AttributeTuple(node, names)
+        att_tuple.lineno = self.attributes[0].lineno
+        self.attributes = attributes
+        self.attributes.append(att_tuple)
+
+    def visitAssAttr(self, node):
+        self.default_visit(node, node.attrname)
+
+    def visitGetattr(self, node, suffix):
+        self.default_visit(node, node.attrname + '.' + suffix)
+
+    def visitName(self, node, suffix):
+        self.attributes.append(Attribute(node, node.name + '.' + suffix))
+
+
+class FunctionVisitor(DocstringVisitor):
+
+    in_function = 0
+    function_class = Function
+
+    def visitFunction(self, node):
+        if self.in_function:
+            self.documentable = None
+            # Don't bother with nested function definitions.
+            return
+        self.in_function = 1
+        self.function = function = self.function_class(node, node.name)
+        if node.doc is not None:
+            function.append(Docstring(node, node.doc))
+        self.context.append(function)
+        self.documentable = function
+        self.parse_parameter_list(node)
+        self.visit(node.code)
+        self.context.pop()
+
+    def parse_parameter_list(self, node):
+        parameters = []
+        special = []
+        argnames = list(node.argnames)
+        if node.kwargs:
+            special.append(ExcessKeywordArguments(node, argnames[-1]))
+            argnames.pop()
+        if node.varargs:
+            special.append(ExcessPositionalArguments(node, argnames[-1]))
+            argnames.pop()
+        defaults = list(node.defaults)
+        defaults = [None] * (len(argnames) - len(defaults)) + defaults
+        function_parameters = self.token_parser.function_parameters(
+            node.lineno)
+        #print >>sys.stderr, function_parameters
+        for argname, default in zip(argnames, defaults):
+            if type(argname) is TupleType:
+                parameter = ParameterTuple(node, argname)
+                argname = normalize_parameter_name(argname)
+            else:
+                parameter = Parameter(node, argname)
+            if default:
+                parameter.append(Default(node, function_parameters[argname]))
+            parameters.append(parameter)
+        if parameters or special:
+            special.reverse()
+            parameters.extend(special)
+            parameter_list = ParameterList(node)
+            parameter_list.extend(parameters)
+            self.function.append(parameter_list)
+
+
+class ClassVisitor(AssignmentVisitor):
+
+    in_class = 0
+
+    def __init__(self, token_parser):
+        AssignmentVisitor.__init__(self, token_parser)
+        self.bases = []
+
+    def visitClass(self, node):
+        if self.in_class:
+            self.documentable = None
+            # Don't bother with nested class definitions.
+            return
+        self.in_class = 1
+        #import mypdb as pdb
+        #pdb.set_trace()
+        for base in node.bases:
+            self.visit(base)
+        self.klass = klass = Class(node, node.name, self.bases)
+        if node.doc is not None:
+            klass.append(Docstring(node, node.doc))
+        self.context.append(klass)
+        self.documentable = klass
+        self.visit(node.code)
+        self.context.pop()
+
+    def visitGetattr(self, node, suffix=None):
+        if suffix:
+            name = node.attrname + '.' + suffix
+        else:
+            name = node.attrname
+        self.default_visit(node, name)
+
+    def visitName(self, node, suffix=None):
+        if suffix:
+            name = node.name + '.' + suffix
+        else:
+            name = node.name
+        self.bases.append(name)
+
+    def visitFunction(self, node):
+        if node.name == '__init__':
+            visitor = InitMethodVisitor(self.token_parser)
+        else:
+            visitor = MethodVisitor(self.token_parser)
+        compiler.walk(node, visitor, walker=visitor)
+        self.context[-1].append(visitor.function)
+
+
+class MethodVisitor(FunctionVisitor):
+
+    function_class = Method
+
+
+class InitMethodVisitor(MethodVisitor, AssignmentVisitor): pass
+
+
+class TokenParser:
+
+    def __init__(self, text):
+        self.text = text + '\n\n'
+        self.lines = self.text.splitlines(1)
+        self.generator = tokenize.generate_tokens(iter(self.lines).next)
+        self.next()
+
+    def __iter__(self):
+        return self
+
+    def next(self):
+        self.token = self.generator.next()
+        self.type, self.string, self.start, self.end, self.line = self.token
+        return self.token
+
+    def goto_line(self, lineno):
+        while self.start[0] < lineno:
+            self.next()
+        return self.token
+
+    def rhs(self, lineno):
+        """
+        Return a whitespace-normalized expression string from the right-hand
+        side of an assignment at line `lineno`.
+        """
+        self.goto_line(lineno)
+        while self.string != '=':
+            self.next()
+        self.stack = None
+        while self.type != token.NEWLINE and self.string != ';':
+            if self.string == '=' and not self.stack:
+                self.tokens = []
+                self.stack = []
+                self._type = None
+                self._string = None
+                self._backquote = 0
+            else:
+                self.note_token()
+            self.next()
+        self.next()
+        text = ''.join(self.tokens)
+        return text.strip()
+
+    closers = {')': '(', ']': '[', '}': '{'}
+    openers = {'(': 1, '[': 1, '{': 1}
+    del_ws_prefix = {'.': 1, '=': 1, ')': 1, ']': 1, '}': 1, ':': 1, ',': 1}
+    no_ws_suffix = {'.': 1, '=': 1, '(': 1, '[': 1, '{': 1}
+
+    def note_token(self):
+        if self.type == tokenize.NL:
+            return
+        del_ws = self.del_ws_prefix.has_key(self.string)
+        append_ws = not self.no_ws_suffix.has_key(self.string)
+        if self.openers.has_key(self.string):
+            self.stack.append(self.string)
+            if (self._type == token.NAME
+                or self.closers.has_key(self._string)):
+                del_ws = 1
+        elif self.closers.has_key(self.string):
+            assert self.stack[-1] == self.closers[self.string]
+            self.stack.pop()
+        elif self.string == '`':
+            if self._backquote:
+                del_ws = 1
+                assert self.stack[-1] == '`'
+                self.stack.pop()
+            else:
+                append_ws = 0
+                self.stack.append('`')
+            self._backquote = not self._backquote
+        if del_ws and self.tokens and self.tokens[-1] == ' ':
+            del self.tokens[-1]
+        self.tokens.append(self.string)
+        self._type = self.type
+        self._string = self.string
+        if append_ws:
+            self.tokens.append(' ')
+
+    def function_parameters(self, lineno):
+        """
+        Return a dictionary mapping parameters to defaults
+        (whitespace-normalized strings).
+        """
+        self.goto_line(lineno)
+        while self.string != 'def':
+            self.next()
+        while self.string != '(':
+            self.next()
+        name = None
+        default = None
+        parameter_tuple = None
+        self.tokens = []
+        parameters = {}
+        self.stack = [self.string]
+        self.next()
+        while 1:
+            if len(self.stack) == 1:
+                if parameter_tuple:
+                    # Just encountered ")".
+                    #print >>sys.stderr, 'parameter_tuple: %r' % self.tokens
+                    name = ''.join(self.tokens).strip()
+                    self.tokens = []
+                    parameter_tuple = None
+                if self.string in (')', ','):
+                    if name:
+                        if self.tokens:
+                            default_text = ''.join(self.tokens).strip()
+                        else:
+                            default_text = None
+                        parameters[name] = default_text
+                        self.tokens = []
+                        name = None
+                        default = None
+                    if self.string == ')':
+                        break
+                elif self.type == token.NAME:
+                    if name and default:
+                        self.note_token()
+                    else:
+                        assert name is None, (
+                            'token=%r name=%r parameters=%r stack=%r'
+                            % (self.token, name, parameters, self.stack))
+                        name = self.string
+                        #print >>sys.stderr, 'name=%r' % name
+                elif self.string == '=':
+                    assert name is not None, 'token=%r' % (self.token,)
+                    assert default is None, 'token=%r' % (self.token,)
+                    assert self.tokens == [], 'token=%r' % (self.token,)
+                    default = 1
+                    self._type = None
+                    self._string = None
+                    self._backquote = 0
+                elif name:
+                    self.note_token()
+                elif self.string == '(':
+                    parameter_tuple = 1
+                    self._type = None
+                    self._string = None
+                    self._backquote = 0
+                    self.note_token()
+                else:                   # ignore these tokens:
+                    assert (self.string in ('*', '**', '\n') 
+                            or self.type == tokenize.COMMENT), (
+                        'token=%r' % (self.token,))
+            else:
+                self.note_token()
+            self.next()
+        return parameters
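+
+# Illustrative sketch of the TokenParser in isolation (the source strings
+# below are made-up examples; results assume Python 2's tokenize module):
+#
+#     TokenParser('x = (1 +\n     2)\n').rhs(1)
+#     # -> '(1 + 2)'
+#     TokenParser('def f(a, b=1 + 2):\n    pass\n').function_parameters(1)
+#     # -> {'a': None, 'b': '1 + 2'}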
+
+
+def trim_docstring(text):
+    """
+    Trim indentation and blank lines from docstring text & return it.
+
+    See PEP 257.
+    """
+    if not text:
+        return text
+    # Convert tabs to spaces (following the normal Python rules)
+    # and split into a list of lines:
+    lines = text.expandtabs().splitlines()
+    # Determine minimum indentation (first line doesn't count):
+    indent = sys.maxint
+    for line in lines[1:]:
+        stripped = line.lstrip()
+        if stripped:
+            indent = min(indent, len(line) - len(stripped))
+    # Remove indentation (first line is special):
+    trimmed = [lines[0].strip()]
+    if indent < sys.maxint:
+        for line in lines[1:]:
+            trimmed.append(line[indent:].rstrip())
+    # Strip off trailing and leading blank lines:
+    while trimmed and not trimmed[-1]:
+        trimmed.pop()
+    while trimmed and not trimmed[0]:
+        trimmed.pop(0)
+    # Return a single string:
+    return '\n'.join(trimmed)
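+
+# Illustrative example (the docstring text below is made up):
+#
+#     trim_docstring('\n    First line.\n\n    More detail.\n    ')
+#     # -> 'First line.\n\nMore detail.'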
+
+def normalize_parameter_name(name):
+    """
+    Converts a tuple like ``('a', ('b', 'c'), 'd')`` into ``'(a, (b, c), d)'``
+    """
+    if type(name) is TupleType:
+        return '(%s)' % ', '.join([normalize_parameter_name(n) for n in name])
+    else:
+        return name

Added: trunk/www/utils/helpers/docutils/docutils/readers/standalone.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/readers/standalone.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/readers/standalone.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,49 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.7 $
+# Date: $Date: 2003/05/29 15:17:58 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Standalone file Reader for the reStructuredText markup syntax.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+from docutils import readers
+from docutils.transforms import frontmatter, references
+from docutils.parsers.rst import Parser
+
+
+class Reader(readers.Reader):
+
+    supported = ('standalone',)
+    """Contexts this reader supports."""
+
+    document = None
+    """A single document tree."""
+
+    settings_spec = (
+        'Standalone Reader',
+        None,
+        (('Disable the promotion of a lone top-level section title to '
+          'document title (and subsequent section title to document '
+          'subtitle promotion; enabled by default).',
+          ['--no-doc-title'],
+          {'dest': 'doctitle_xform', 'action': 'store_false', 'default': 1}),
+         ('Disable the bibliographic field list transform (enabled by '
+          'default).',
+          ['--no-doc-info'],
+          {'dest': 'docinfo_xform', 'action': 'store_false', 'default': 1}),))
+
+    default_transforms = (references.Substitutions,
+                          frontmatter.DocTitle,
+                          frontmatter.DocInfo,
+                          references.ChainedTargets,
+                          references.AnonymousHyperlinks,
+                          references.IndirectHyperlinks,
+                          references.Footnotes,
+                          references.ExternalTargets,
+                          references.InternalTargets,)

Added: trunk/www/utils/helpers/docutils/docutils/statemachine.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/statemachine.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/statemachine.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,1450 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.13 $
+# Date: $Date: 2003/01/01 15:50:23 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+A finite state machine specialized for regular-expression-based text filters,
+this module defines the following classes:
+
+- `StateMachine`, a state machine
+- `State`, a state superclass
+- `StateMachineWS`, a whitespace-sensitive version of `StateMachine`
+- `StateWS`, a state superclass for use with `StateMachineWS`
+- `SearchStateMachine`, uses `re.search()` instead of `re.match()`
+- `SearchStateMachineWS`, uses `re.search()` instead of `re.match()`
+- `ViewList`, extends standard Python lists.
+- `StringList`, string-specific ViewList.
+
+Exception classes:
+
+- `StateMachineError`
+- `UnknownStateError`
+- `DuplicateStateError`
+- `UnknownTransitionError`
+- `DuplicateTransitionError`
+- `TransitionPatternNotFound`
+- `TransitionMethodNotFound`
+- `UnexpectedIndentationError`
+- `TransitionCorrection`: Raised to switch to another transition.
+- `StateCorrection`: Raised to switch to another state & transition.
+
+Functions:
+
+- `string2lines()`: split a multi-line string into a list of one-line strings
+
+
+How To Use This Module
+======================
+(See the individual classes, methods, and attributes for details.)
+
+1. Import it: ``import statemachine`` or ``from statemachine import ...``.
+   You will also need to ``import re``.
+
+2. Derive a subclass of `State` (or `StateWS`) for each state in your state
+   machine::
+
+       class MyState(statemachine.State):
+
+   Within the state's class definition:
+
+   a) Include a pattern for each transition, in `State.patterns`::
+
+          patterns = {'atransition': r'pattern', ...}
+
+   b) Include a list of initial transitions to be set up automatically, in
+      `State.initial_transitions`::
+
+          initial_transitions = ['atransition', ...]
+
+   c) Define a method for each transition, with the same name as the
+      transition pattern::
+
+          def atransition(self, match, context, next_state):
+              # do something
+              result = [...]  # a list
+              return context, next_state, result
+              # context, next_state may be altered
+
+      Transition methods may raise an `EOFError` to cut processing short.
+
+   d) You may wish to override the `State.bof()` and/or `State.eof()` implicit
+      transition methods, which handle the beginning- and end-of-file.
+
+   e) In order to handle nested processing, you may wish to override the
+      attributes `State.nested_sm` and/or `State.nested_sm_kwargs`.
+
+      If you are using `StateWS` as a base class, in order to handle nested
+      indented blocks, you may wish to:
+
+      - override the attributes `StateWS.indent_sm`,
+        `StateWS.indent_sm_kwargs`, `StateWS.known_indent_sm`, and/or
+        `StateWS.known_indent_sm_kwargs`;
+      - override the `StateWS.blank()` method; and/or
+      - override or extend the `StateWS.indent()`, `StateWS.known_indent()`,
+        and/or `StateWS.firstknown_indent()` methods.
+
+3. Create a state machine object::
+
+       sm = StateMachine(state_classes=[MyState, ...],
+                         initial_state='MyState')
+
+4. Obtain the input text, which needs to be converted into a tab-free list of
+   one-line strings. For example, to read text from a file called
+   'inputfile'::
+
+       input_string = open('inputfile').read()
+       input_lines = statemachine.string2lines(input_string)
+
+5. Run the state machine on the input text and collect the results, a list::
+
+       results = sm.run(input_lines)
+
+6. Remove any lingering circular references::
+
+       sm.unlink()
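+
+A minimal end-to-end sketch combining the steps above (the state class,
+pattern, and input below are made-up examples, not part of Docutils)::
+
+    class Shout(statemachine.State):
+        patterns = {'word': r'\w+'}
+        initial_transitions = ['word']
+        def word(self, match, context, next_state):
+            # Return the matched line, uppercased, as this line's result.
+            return context, next_state, [match.string.upper()]
+
+    sm = statemachine.StateMachine(state_classes=[Shout],
+                                   initial_state='Shout')
+    results = sm.run(statemachine.string2lines('hello\nworld'))
+    sm.unlink()   # results should now be ['HELLO', 'WORLD']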
+"""
+
+__docformat__ = 'restructuredtext'
+
+import sys
+import re
+from types import SliceType as _SliceType
+
+
+class StateMachine:
+
+    """
+    A finite state machine for text filters using regular expressions.
+
+    The input is provided in the form of a list of one-line strings (no
+    newlines). States are subclasses of the `State` class. Transitions consist
+    of regular expression patterns and transition methods, and are defined in
+    each state.
+
+    The state machine is started with the `run()` method, which returns the
+    results of processing in a list.
+    """
+
+    def __init__(self, state_classes, initial_state, debug=0):
+        """
+        Initialize a `StateMachine` object; add state objects.
+
+        Parameters:
+
+        - `state_classes`: a list of `State` (sub)classes.
+        - `initial_state`: a string, the class name of the initial state.
+        - `debug`: a boolean; produce verbose output if true (nonzero).
+        """
+
+        self.input_lines = None
+        """`StringList` of input lines (without newlines).
+        Filled by `self.run()`."""
+
+        self.input_offset = 0
+        """Offset of `self.input_lines` from the beginning of the file."""
+
+        self.line = None
+        """Current input line."""
+
+        self.line_offset = -1
+        """Current input line offset from beginning of `self.input_lines`."""
+
+        self.debug = debug
+        """Debugging mode on/off."""
+
+        self.initial_state = initial_state
+        """The name of the initial state (key to `self.states`)."""
+
+        self.current_state = initial_state
+        """The name of the current state (key to `self.states`)."""
+
+        self.states = {}
+        """Mapping of {state_name: State_object}."""
+
+        self.add_states(state_classes)
+
+        self.observers = []
+        """List of bound methods or functions to call whenever the current
+        line changes.  Observers are called with one argument, ``self``.
+        Cleared at the end of `run()`."""
+
+    def unlink(self):
+        """Remove circular references to objects no longer required."""
+        for state in self.states.values():
+            state.unlink()
+        self.states = None
+
+    def run(self, input_lines, input_offset=0, context=None,
+            input_source=None):
+        """
+        Run the state machine on `input_lines`. Return results (a list).
+
+        Reset `self.line_offset` and `self.current_state`. Run the
+        beginning-of-file transition. Input one line at a time and check for a
+        matching transition. If a match is found, call the transition method
+        and possibly change the state. Store the context returned by the
+        transition method to be passed on to the next transition matched.
+        Accumulate the results returned by the transition methods in a list.
+        Run the end-of-file transition. Finally, return the accumulated
+        results.
+
+        Parameters:
+
+        - `input_lines`: a list of strings without newlines, or `StringList`.
+        - `input_offset`: the line offset of `input_lines` from the beginning
+          of the file.
+        - `context`: application-specific storage.
+        - `input_source`: name or path of source of `input_lines`.
+        """
+        self.runtime_init()
+        if isinstance(input_lines, StringList):
+            self.input_lines = input_lines
+        else:
+            self.input_lines = StringList(input_lines, source=input_source)
+        self.input_offset = input_offset
+        self.line_offset = -1
+        self.current_state = self.initial_state
+        if self.debug:
+            print >>sys.stderr, (
+                '\nStateMachine.run: input_lines (line_offset=%s):\n| %s'
+                % (self.line_offset, '\n| '.join(self.input_lines)))
+        transitions = None
+        results = []
+        state = self.get_state()
+        try:
+            if self.debug:
+                print >>sys.stderr, ('\nStateMachine.run: bof transition')
+            context, result = state.bof(context)
+            results.extend(result)
+            while 1:
+                try:
+                    try:
+                        self.next_line()
+                        if self.debug:
+                            source, offset = self.input_lines.info(
+                                self.line_offset)
+                            print >>sys.stderr, (
+                                '\nStateMachine.run: line (source=%r, '
+                                'offset=%r):\n| %s'
+                                % (source, offset, self.line))
+                        context, next_state, result = self.check_line(
+                            context, state, transitions)
+                    except EOFError:
+                        if self.debug:
+                            print >>sys.stderr, (
+                                '\nStateMachine.run: %s.eof transition'
+                                % state.__class__.__name__)
+                        result = state.eof(context)
+                        results.extend(result)
+                        break
+                    else:
+                        results.extend(result)
+                except TransitionCorrection, exception:
+                    self.previous_line() # back up for another try
+                    transitions = (exception.args[0],)
+                    if self.debug:
+                        print >>sys.stderr, (
+                              '\nStateMachine.run: TransitionCorrection to '
+                              'state "%s", transition %s.'
+                              % (state.__class__.__name__, transitions[0]))
+                    continue
+                except StateCorrection, exception:
+                    self.previous_line() # back up for another try
+                    next_state = exception.args[0]
+                    if len(exception.args) == 1:
+                        transitions = None
+                    else:
+                        transitions = (exception.args[1],)
+                    if self.debug:
+                        print >>sys.stderr, (
+                              '\nStateMachine.run: StateCorrection to state '
+                              '"%s", transition %s.'
+                              % (next_state, transitions and transitions[0]))
+                else:
+                    transitions = None
+                state = self.get_state(next_state)
+        except:
+            self.error()
+            raise
+        self.observers = []
+        return results
+
+    def get_state(self, next_state=None):
+        """
+        Return current state object; set it first if `next_state` given.
+
+        Parameter `next_state`: a string, the name of the next state.
+
+        Exception: `UnknownStateError` raised if `next_state` unknown.
+        """
+        if next_state:
+            if self.debug and next_state != self.current_state:
+                print >>sys.stderr, \
+                      ('\nStateMachine.get_state: Changing state from '
+                       '"%s" to "%s" (input line %s).'
+                       % (self.current_state, next_state,
+                          self.abs_line_number()))
+            self.current_state = next_state
+        try:
+            return self.states[self.current_state]
+        except KeyError:
+            raise UnknownStateError(self.current_state)
+
+    def next_line(self, n=1):
+        """Load `self.line` with the `n`'th next line and return it."""
+        try:
+            try:
+                self.line_offset += n
+                self.line = self.input_lines[self.line_offset]
+            except IndexError:
+                self.line = None
+                raise EOFError
+            return self.line
+        finally:
+            self.notify_observers()
+
+    def is_next_line_blank(self):
+        """Return 1 if the next line is blank or non-existant."""
+        try:
+            return not self.input_lines[self.line_offset + 1].strip()
+        except IndexError:
+            return 1
+
+    def at_eof(self):
+        """Return 1 if the input is at or past end-of-file."""
+        return self.line_offset >= len(self.input_lines) - 1
+
+    def at_bof(self):
+        """Return 1 if the input is at or before beginning-of-file."""
+        return self.line_offset <= 0
+
+    def previous_line(self, n=1):
+        """Load `self.line` with the `n`'th previous line and return it."""
+        self.line_offset -= n
+        if self.line_offset < 0:
+            self.line = None
+        else:
+            self.line = self.input_lines[self.line_offset]
+        self.notify_observers()
+        return self.line
+
+    def goto_line(self, line_offset):
+        """Jump to absolute line offset `line_offset`, load and return it."""
+        try:
+            try:
+                self.line_offset = line_offset - self.input_offset
+                self.line = self.input_lines[self.line_offset]
+            except IndexError:
+                self.line = None
+                raise EOFError
+            return self.line
+        finally:
+            self.notify_observers()
+
+    def abs_line_offset(self):
+        """Return line offset of current line, from beginning of file."""
+        return self.line_offset + self.input_offset
+
+    def abs_line_number(self):
+        """Return line number of current line (counting from 1)."""
+        return self.line_offset + self.input_offset + 1
+
+    def insert_input(self, input_lines, source):
+        self.input_lines.insert(self.line_offset + 1, '',
+                                source='internal padding')
+        self.input_lines.insert(self.line_offset + 1, '',
+                                source='internal padding')
+        self.input_lines.insert(self.line_offset + 2,
+                                StringList(input_lines, source))
+
+    def get_text_block(self, flush_left=0):
+        """
+        Return a contiguous block of text.
+
+        If `flush_left` is true, raise `UnexpectedIndentationError` if an
+        indented line is encountered before the text block ends (with a blank
+        line).
+        """
+        try:
+            block = self.input_lines.get_text_block(self.line_offset,
+                                                    flush_left)
+            self.next_line(len(block) - 1)
+            return block
+        except UnexpectedIndentationError, error:
+            block, source, lineno = error
+            self.next_line(len(block) - 1) # advance to last line of block
+            raise
+
+    def check_line(self, context, state, transitions=None):
+        """
+        Examine one line of input for a transition match & execute its method.
+
+        Parameters:
+
+        - `context`: application-dependent storage.
+        - `state`: a `State` object, the current state.
+        - `transitions`: an optional ordered list of transition names to try,
+          instead of ``state.transition_order``.
+
+        Return the values returned by the transition method:
+
+        - context: possibly modified from the parameter `context`;
+        - next state name (`State` subclass name);
+        - the result output of the transition, a list.
+
+        When there is no match, ``state.no_match()`` is called and its return
+        value is returned.
+        """
+        if transitions is None:
+            transitions =  state.transition_order
+        state_correction = None
+        if self.debug:
+            print >>sys.stderr, (
+                  '\nStateMachine.check_line: state="%s", transitions=%r.'
+                  % (state.__class__.__name__, transitions))
+        for name in transitions:
+            pattern, method, next_state = state.transitions[name]
+            match = self.match(pattern)
+            if match:
+                if self.debug:
+                    print >>sys.stderr, (
+                          '\nStateMachine.check_line: Matched transition '
+                          '"%s" in state "%s".'
+                          % (name, state.__class__.__name__))
+                return method(match, context, next_state)
+        else:
+            if self.debug:
+                print >>sys.stderr, (
+                      '\nStateMachine.check_line: No match in state "%s".'
+                      % state.__class__.__name__)
+            return state.no_match(context, transitions)
+
+    def match(self, pattern):
+        """
+        Return the result of a regular expression match.
+
+        Parameter `pattern`: an `re` compiled regular expression.
+        """
+        return pattern.match(self.line)
+
+    def add_state(self, state_class):
+        """
+        Initialize & add a `state_class` (`State` subclass) object.
+
+        Exception: `DuplicateStateError` raised if `state_class` was already
+        added.
+        """
+        statename = state_class.__name__
+        if self.states.has_key(statename):
+            raise DuplicateStateError(statename)
+        self.states[statename] = state_class(self, self.debug)
+
+    def add_states(self, state_classes):
+        """
+        Add `state_classes` (a list of `State` subclasses).
+        """
+        for state_class in state_classes:
+            self.add_state(state_class)
+
+    def runtime_init(self):
+        """
+        Initialize `self.states`.
+        """
+        for state in self.states.values():
+            state.runtime_init()
+
+    def error(self):
+        """Report error details."""
+        type, value, module, line, function = _exception_data()
+        print >>sys.stderr, '%s: %s' % (type, value)
+        print >>sys.stderr, 'input line %s' % (self.abs_line_number())
+        print >>sys.stderr, ('module %s, line %s, function %s'
+                             % (module, line, function))
+
+    def attach_observer(self, observer):
+        """
+        The `observer` parameter is a function or bound method which takes two
+        arguments, the source and offset of the current line.
+        """
+        self.observers.append(observer)
+
+    def detach_observer(self, observer):
+        self.observers.remove(observer)
+
+    def notify_observers(self):
+        for observer in self.observers:
+            try:
+                info = self.input_lines.info(self.line_offset)
+            except IndexError:
+                info = (None, None)
+            observer(*info)
+
+
+class State:
+
+    """
+    State superclass. Contains a list of transitions, and transition methods.
+
+    Transition methods all have the same signature. They take 3 parameters:
+
+    - An `re` match object. ``match.string`` contains the matched input line,
+      ``match.start()`` gives the start index of the match, and
+      ``match.end()`` gives the end index.
+    - A context object, whose meaning is application-defined (initial value
+      ``None``). It can be used to store any information required by the state
+      machine, and the returned context is passed on to the next transition
+      method unchanged.
+    - The name of the next state, a string, taken from the transitions list;
+      normally it is returned unchanged, but it may be altered by the
+      transition method if necessary.
+
+    Transition methods all return a 3-tuple:
+
+    - A context object, as (potentially) modified by the transition method.
+    - The next state name (a return value of ``None`` means no state change).
+    - The processing result, a list, which is accumulated by the state
+      machine.
+
+    Transition methods may raise an `EOFError` to cut processing short.
+
+    There are two implicit transitions, and corresponding transition methods
+    are defined: `bof()` handles the beginning-of-file, and `eof()` handles
+    the end-of-file. These methods have non-standard signatures and return
+    values. `bof()` returns the initial context and results, and may be used
+    to return a header string, or do any other processing needed. `eof()`
+    should handle any remaining context and wrap things up; it returns the
+    final processing result.
+
+    Typical applications need only subclass `State` (or a subclass), set the
+    `patterns` and `initial_transitions` class attributes, and provide
+    corresponding transition methods. The default object initialization will
+    take care of constructing the list of transitions.
+    """
+
+    patterns = None
+    """
+    {Name: pattern} mapping, used by `make_transition()`. Each pattern may
+    be a string or a compiled `re` pattern. Override in subclasses.
+    """
+
+    initial_transitions = None
+    """
+    A list of transitions to initialize when a `State` is instantiated.
+    Each entry is either a transition name string, or a (transition name, next
+    state name) pair. See `make_transitions()`. Override in subclasses.
+    """
+
+    nested_sm = None
+    """
+    The `StateMachine` class for handling nested processing.
+
+    If left as ``None``, `nested_sm` defaults to the class of the state's
+    controlling state machine. Override it in subclasses to avoid the default.
+    """
+
+    nested_sm_kwargs = None
+    """
+    Keyword arguments dictionary, passed to the `nested_sm` constructor.
+
+    Two keys must have entries in the dictionary:
+
+    - Key 'state_classes' must be set to a list of `State` classes.
+    - Key 'initial_state' must be set to the name of the initial state class.
+
+    If `nested_sm_kwargs` is left as ``None``, 'state_classes' defaults to the
+    class of the current state, and 'initial_state' defaults to the name of
+    the class of the current state. Override in subclasses to avoid the
+    defaults.
+    """
+
+    def __init__(self, state_machine, debug=0):
+        """
+        Initialize a `State` object; make & add initial transitions.
+
+        Parameters:
+
+        - `state_machine`: the controlling `StateMachine` object.
+        - `debug`: a boolean; produce verbose output if true (nonzero).
+        """
+
+        self.transition_order = []
+        """A list of transition names in search order."""
+
+        self.transitions = {}
+        """
+        A mapping of transition names to 3-tuples containing
+        (compiled_pattern, transition_method, next_state_name). Initialized as
+        an instance attribute dynamically (instead of as a class attribute)
+        because it may make forward references to patterns and methods in this
+        or other classes.
+        """
+
+        self.add_initial_transitions()
+
+        self.state_machine = state_machine
+        """A reference to the controlling `StateMachine` object."""
+
+        self.debug = debug
+        """Debugging mode on/off."""
+
+        if self.nested_sm is None:
+            self.nested_sm = self.state_machine.__class__
+        if self.nested_sm_kwargs is None:
+            self.nested_sm_kwargs = {'state_classes': [self.__class__],
+                                     'initial_state': self.__class__.__name__}
+
+    def runtime_init(self):
+        """
+        Initialize this `State` before running the state machine; called from
+        `self.state_machine.run()`.
+        """
+        pass
+
+    def unlink(self):
+        """Remove circular references to objects no longer required."""
+        self.state_machine = None
+
+    def add_initial_transitions(self):
+        """Make and add transitions listed in `self.initial_transitions`."""
+        if self.initial_transitions:
+            names, transitions = self.make_transitions(
+                  self.initial_transitions)
+            self.add_transitions(names, transitions)
+
+    def add_transitions(self, names, transitions):
+        """
+        Add a list of transitions to the start of the transition list.
+
+        Parameters:
+
+        - `names`: a list of transition names.
+        - `transitions`: a mapping of names to transition tuples.
+
+        Exceptions: `DuplicateTransitionError`, `UnknownTransitionError`.
+        """
+        for name in names:
+            if self.transitions.has_key(name):
+                raise DuplicateTransitionError(name)
+            if not transitions.has_key(name):
+                raise UnknownTransitionError(name)
+        self.transition_order[:0] = names
+        self.transitions.update(transitions)
+
+    def add_transition(self, name, transition):
+        """
+        Add a transition to the start of the transition list.
+
+        Parameter `transition`: a ready-made transition 3-tuple.
+
+        Exception: `DuplicateTransitionError`.
+        """
+        if self.transitions.has_key(name):
+            raise DuplicateTransitionError(name)
+        self.transition_order[:0] = [name]
+        self.transitions[name] = transition
+
+    def remove_transition(self, name):
+        """
+        Remove a transition by `name`.
+
+        Exception: `UnknownTransitionError`.
+        """
+        try:
+            del self.transitions[name]
+            self.transition_order.remove(name)
+        except:
+            raise UnknownTransitionError(name)
+
+    def make_transition(self, name, next_state=None):
+        """
+        Make & return a transition tuple based on `name`.
+
+        This is a convenience function to simplify transition creation.
+
+        Parameters:
+
+        - `name`: a string, the name of the transition pattern & method. This
+          `State` object must have a method called '`name`', and a dictionary
+          `self.patterns` containing a key '`name`'.
+        - `next_state`: a string, the name of the next `State` object for this
+          transition. A value of ``None`` (or absent) implies no state change
+          (i.e., continue with the same state).
+
+        Exceptions: `TransitionPatternNotFound`, `TransitionMethodNotFound`.
+        """
+        if next_state is None:
+            next_state = self.__class__.__name__
+        try:
+            pattern = self.patterns[name]
+            if not hasattr(pattern, 'match'):
+                pattern = re.compile(pattern)
+        except KeyError:
+            raise TransitionPatternNotFound(
+                  '%s.patterns[%r]' % (self.__class__.__name__, name))
+        try:
+            method = getattr(self, name)
+        except AttributeError:
+            raise TransitionMethodNotFound(
+                  '%s.%s' % (self.__class__.__name__, name))
+        return (pattern, method, next_state)
+
+    def make_transitions(self, name_list):
+        """
+        Return a list of transition names and a transition mapping.
+
+        Parameter `name_list`: a list, where each entry is either a transition
+        name string, or a 1- or 2-tuple (transition name, optional next state
+        name).
+        """
+        stringtype = type('')
+        names = []
+        transitions = {}
+        for namestate in name_list:
+            if type(namestate) is stringtype:
+                transitions[namestate] = self.make_transition(namestate)
+                names.append(namestate)
+            else:
+                transitions[namestate[0]] = self.make_transition(*namestate)
+                names.append(namestate[0])
+        return names, transitions
+
+    def no_match(self, context, transitions):
+        """
+        Called when there is no match from `StateMachine.check_line()`.
+
+        Return the same values returned by transition methods:
+
+        - context: unchanged;
+        - next state name: ``None``;
+        - empty result list.
+
+        Override in subclasses to catch this event.
+        """
+        return context, None, []
+
+    def bof(self, context):
+        """
+        Handle beginning-of-file. Return unchanged `context`, empty result.
+
+        Override in subclasses.
+
+        Parameter `context`: application-defined storage.
+        """
+        return context, []
+
+    def eof(self, context):
+        """
+        Handle end-of-file. Return empty result.
+
+        Override in subclasses.
+
+        Parameter `context`: application-defined storage.
+        """
+        return []
+
+    def nop(self, match, context, next_state):
+        """
+        A "do nothing" transition method.
+
+        Return unchanged `context` & `next_state`, empty result. Useful for
+        simple state changes (actionless transitions).
+        """
+        return context, next_state, []
+
+
+class StateMachineWS(StateMachine):
+
+    """
+    `StateMachine` subclass specialized for whitespace recognition.
+
+    There are three methods provided for extracting indented text blocks:
+    
+    - `get_indented()`: use when the indent is unknown.
+    - `get_known_indented()`: use when the indent is known for all lines.
+    - `get_first_known_indented()`: use when only the first line's indent is
+      known.
+    """
+
+    def get_indented(self, until_blank=0, strip_indent=1):
+        """
+        Return a block of indented lines of text, and info.
+
+        Extract an indented block where the indent is unknown for all lines.
+
+        :Parameters:
+            - `until_blank`: Stop collecting at the first blank line if true
+              (1).
+            - `strip_indent`: Strip common leading indent if true (1,
+              default).
+
+        :Return:
+            - the indented block (a list of lines of text),
+            - its indent,
+            - its first line offset from BOF, and
+            - whether or not it finished with a blank line.
+        """
+        offset = self.abs_line_offset()
+        indented, indent, blank_finish = self.input_lines.get_indented(
+              self.line_offset, until_blank, strip_indent)
+        if indented:
+            self.next_line(len(indented) - 1) # advance to last indented line
+        while indented and not indented[0].strip():
+            indented.trim_start()
+            offset += 1
+        return indented, indent, offset, blank_finish
+
+    def get_known_indented(self, indent, until_blank=0, strip_indent=1):
+        """
+        Return an indented block and info.
+
+        Extract an indented block where the indent is known for all lines.
+        Starting with the current line, extract the entire text block with at
+        least `indent` indentation (which must be whitespace, except for the
+        first line).
+
+        :Parameters:
+            - `indent`: The number of indent columns/characters.
+            - `until_blank`: Stop collecting at the first blank line if true
+              (1).
+            - `strip_indent`: Strip `indent` characters of indentation if true
+              (1, default).
+
+        :Return:
+            - the indented block,
+            - its first line offset from BOF, and
+            - whether or not it finished with a blank line.
+        """
+        offset = self.abs_line_offset()
+        indented, indent, blank_finish = self.input_lines.get_indented(
+              self.line_offset, until_blank, strip_indent,
+              block_indent=indent)
+        self.next_line(len(indented) - 1) # advance to last indented line
+        while indented and not indented[0].strip():
+            indented.trim_start()
+            offset += 1
+        return indented, offset, blank_finish
+
+    def get_first_known_indented(self, indent, until_blank=0, strip_indent=1,
+                                 strip_top=1):
+        """
+        Return an indented block and info.
+
+        Extract an indented block where the indent is known for the first line
+        and unknown for all other lines.
+
+        :Parameters:
+            - `indent`: The first line's indent (# of columns/characters).
+            - `until_blank`: Stop collecting at the first blank line if true
+              (1).
+            - `strip_indent`: Strip `indent` characters of indentation if true
+              (1, default).
+            - `strip_top`: Strip blank lines from the beginning of the block.
+
+        :Return:
+            - the indented block,
+            - its indent,
+            - its first line offset from BOF, and
+            - whether or not it finished with a blank line.
+        """
+        offset = self.abs_line_offset()
+        indented, indent, blank_finish = self.input_lines.get_indented(
+              self.line_offset, until_blank, strip_indent,
+              first_indent=indent)
+        self.next_line(len(indented) - 1) # advance to last indented line
+        if strip_top:
+            while indented and not indented[0].strip():
+                indented.trim_start()
+                offset += 1
+        return indented, indent, offset, blank_finish
+
+
+class StateWS(State):
+
+    """
+    State superclass specialized for whitespace (blank lines & indents).
+
+    Use this class with `StateMachineWS`.  The transitions 'blank' (for blank
+    lines) and 'indent' (for indented text blocks) are added automatically,
+    before any other transitions.  The transition method `blank()` handles
+    blank lines and `indent()` handles nested indented blocks.  Indented
+    blocks trigger a new state machine to be created by `indent()` and run.
+    The class of the state machine to be created is in `indent_sm`, and the
+    constructor keyword arguments are in the dictionary `indent_sm_kwargs`.
+
+    The methods `known_indent()` and `firstknown_indent()` are provided for
+    indented blocks where the indent (all lines' and first line's only,
+    respectively) is known to the transition method, along with the attributes
+    `known_indent_sm` and `known_indent_sm_kwargs`.  Neither transition method
+    is triggered automatically.
+    """
+
+    indent_sm = None
+    """
+    The `StateMachine` class handling indented text blocks.
+
+    If left as ``None``, `indent_sm` defaults to the value of
+    `State.nested_sm`.  Override it in subclasses to avoid the default.
+    """
+
+    indent_sm_kwargs = None
+    """
+    Keyword arguments dictionary, passed to the `indent_sm` constructor.
+
+    If left as ``None``, `indent_sm_kwargs` defaults to the value of
+    `State.nested_sm_kwargs`. Override it in subclasses to avoid the default.
+    """
+
+    known_indent_sm = None
+    """
+    The `StateMachine` class handling known-indented text blocks.
+
+    If left as ``None``, `known_indent_sm` defaults to the value of
+    `indent_sm`.  Override it in subclasses to avoid the default.
+    """
+
+    known_indent_sm_kwargs = None
+    """
+    Keyword arguments dictionary, passed to the `known_indent_sm` constructor.
+
+    If left as ``None``, `known_indent_sm_kwargs` defaults to the value of
+    `indent_sm_kwargs`. Override it in subclasses to avoid the default.
+    """
+
+    ws_patterns = {'blank': ' *$',
+                   'indent': ' +'}
+    """Patterns for default whitespace transitions.  May be overridden in
+    subclasses."""
+
+    ws_initial_transitions = ('blank', 'indent')
+    """Default initial whitespace transitions, added before those listed in
+    `State.initial_transitions`.  May be overridden in subclasses."""
+
+    def __init__(self, state_machine, debug=0):
+        """
+        Initialize a `StateWS` object; extends `State.__init__()`.
+
+        Check for indent state machine attributes, set defaults if not set.
+        """
+        State.__init__(self, state_machine, debug)
+        if self.indent_sm is None:
+            self.indent_sm = self.nested_sm
+        if self.indent_sm_kwargs is None:
+            self.indent_sm_kwargs = self.nested_sm_kwargs
+        if self.known_indent_sm is None:
+            self.known_indent_sm = self.indent_sm
+        if self.known_indent_sm_kwargs is None:
+            self.known_indent_sm_kwargs = self.indent_sm_kwargs
+
+    def add_initial_transitions(self):
+        """
+        Add whitespace-specific transitions before those defined in subclass.
+
+        Extends `State.add_initial_transitions()`.
+        """
+        State.add_initial_transitions(self)
+        if self.patterns is None:
+            self.patterns = {}
+        self.patterns.update(self.ws_patterns)
+        names, transitions = self.make_transitions(
+            self.ws_initial_transitions)
+        self.add_transitions(names, transitions)
+
+    def blank(self, match, context, next_state):
+        """Handle blank lines. Does nothing. Override in subclasses."""
+        return self.nop(match, context, next_state)
+
+    def indent(self, match, context, next_state):
+        """
+        Handle an indented text block. Extend or override in subclasses.
+
+        Recursively run the registered state machine for indented blocks
+        (`self.indent_sm`).
+        """
+        indented, indent, line_offset, blank_finish = \
+              self.state_machine.get_indented()
+        sm = self.indent_sm(debug=self.debug, **self.indent_sm_kwargs)
+        results = sm.run(indented, input_offset=line_offset)
+        return context, next_state, results
+
+    def known_indent(self, match, context, next_state):
+        """
+        Handle a known-indent text block. Extend or override in subclasses.
+
+        Recursively run the registered state machine for known-indent indented
+        blocks (`self.known_indent_sm`). The indent is the length of the
+        match, ``match.end()``.
+        """
+        indented, line_offset, blank_finish = \
+              self.state_machine.get_known_indented(match.end())
+        sm = self.known_indent_sm(debug=self.debug,
+                                 **self.known_indent_sm_kwargs)
+        results = sm.run(indented, input_offset=line_offset)
+        return context, next_state, results
+
+    def first_known_indent(self, match, context, next_state):
+        """
+        Handle an indented text block (first line's indent known).
+
+        Extend or override in subclasses.
+
+        Recursively run the registered state machine for known-indent indented
+        blocks (`self.known_indent_sm`). The indent is the length of the
+        match, ``match.end()``.
+        """
+        indented, indent, line_offset, blank_finish = \
+              self.state_machine.get_first_known_indented(match.end())
+        sm = self.known_indent_sm(debug=self.debug,
+                                 **self.known_indent_sm_kwargs)
+        results = sm.run(indented, input_offset=line_offset)
+        return context, next_state, results
+
+
+class _SearchOverride:
+
+    """
+    Mix-in class to override `StateMachine` regular expression behavior.
+
+    Changes regular expression matching, from the default `re.match()`
+    (succeeds only if the pattern matches at the start of `self.line`) to
+    `re.search()` (succeeds if the pattern matches anywhere in `self.line`).
+    When subclassing a `StateMachine`, list this class **first** in the
+    inheritance list of the class definition.
+    """
+
+    def match(self, pattern):
+        """
+        Return the result of a regular expression search.
+
+        Overrides `StateMachine.match()`.
+
+        Parameter `pattern`: `re` compiled regular expression.
+        """
+        return pattern.search(self.line)
+
+
+class SearchStateMachine(_SearchOverride, StateMachine):
+    """`StateMachine` which uses `re.search()` instead of `re.match()`."""
+    pass
+
+
+class SearchStateMachineWS(_SearchOverride, StateMachineWS):
+    """`StateMachineWS` which uses `re.search()` instead of `re.match()`."""
+    pass
+
+
+class ViewList:
+
+    """
+    List with extended functionality: slices of ViewList objects are child
+    lists, linked to their parents. Changes made to a child list also affect
+    the parent list.  A child list is effectively a "view" (in the SQL sense)
+    of the parent list.  Changes to parent lists, however, do *not* affect
+    active child lists.  If a parent list is changed, any active child lists
+    should be recreated.
+
+    The start and end of the slice can be trimmed using the `trim_start()` and
+    `trim_end()` methods, without affecting the parent list.  The link between
+    child and parent lists can be broken by calling `disconnect()` on the
+    child list.
+
+    Also, ViewList objects keep track of the source & offset of each item. 
+    This information is accessible via the `source()`, `offset()`, and
+    `info()` methods.
+    """
+
+    def __init__(self, initlist=None, source=None, items=None,
+                 parent=None, parent_offset=None):
+        self.data = []
+        """The actual list of data, flattened from various sources."""
+
+        self.items = []
+        """A list of (source, offset) pairs, same length as `self.data`: the
+        source of each line and the offset of each line from the beginning of
+        its source."""
+
+        self.parent = parent
+        """The parent list."""
+
+        self.parent_offset = parent_offset
+        """Offset of this list from the beginning of the parent list."""
+
+        if isinstance(initlist, ViewList):
+            self.data = initlist.data[:]
+            self.items = initlist.items[:]
+        elif initlist is not None:
+            self.data = list(initlist)
+            if items:
+                self.items = items
+            else:
+                self.items = [(source, i) for i in range(len(initlist))]
+        assert len(self.data) == len(self.items), 'data mismatch'
+
+    def __str__(self):
+        return str(self.data)
+
+    def __repr__(self):
+        return '%s(%s, items=%s)' % (self.__class__.__name__,
+                                     self.data, self.items)
+
+    def __lt__(self, other): return self.data <  self.__cast(other)
+    def __le__(self, other): return self.data <= self.__cast(other)
+    def __eq__(self, other): return self.data == self.__cast(other)
+    def __ne__(self, other): return self.data != self.__cast(other)
+    def __gt__(self, other): return self.data >  self.__cast(other)
+    def __ge__(self, other): return self.data >= self.__cast(other)
+    def __cmp__(self, other): return cmp(self.data, self.__cast(other))
+
+    def __cast(self, other):
+        if isinstance(other, ViewList):
+            return other.data
+        else:
+            return other
+
+    def __contains__(self, item): return item in self.data
+    def __len__(self): return len(self.data)
+
+    # The __getitem__()/__setitem__() methods check whether the index
+    # is a slice first, since native list objects start supporting
+    # them directly in Python 2.3 (no exception is raised when
+    # indexing a list with a slice object; they just work).
+
+    def __getitem__(self, i):
+        if isinstance(i, _SliceType):
+            assert i.step in (None, 1),  'cannot handle slice with stride'
+            return self.__class__(self.data[i.start:i.stop],
+                                  items=self.items[i.start:i.stop],
+                                  parent=self, parent_offset=i.start)
+        else:
+            return self.data[i]
+
+    def __setitem__(self, i, item):
+        if isinstance(i, _SliceType):
+            assert i.step in (None, 1), 'cannot handle slice with stride'
+            if not isinstance(item, ViewList):
+                raise TypeError('assigning non-ViewList to ViewList slice')
+            self.data[i.start:i.stop] = item.data
+            self.items[i.start:i.stop] = item.items
+            assert len(self.data) == len(self.items), 'data mismatch'
+            if self.parent:
+                self.parent[i.start + self.parent_offset
+                            : i.stop + self.parent_offset] = item
+        else:
+            self.data[i] = item
+            if self.parent:
+                self.parent[i + self.parent_offset] = item
+
+    def __delitem__(self, i):
+        try:
+            del self.data[i]
+            del self.items[i]
+            if self.parent:
+                del self.parent[i + self.parent_offset]
+        except TypeError:
+            assert i.step is None, 'cannot handle slice with stride'
+            del self.data[i.start:i.stop]
+            del self.items[i.start:i.stop]
+            if self.parent:
+                del self.parent[i.start + self.parent_offset
+                                : i.stop + self.parent_offset]
+
+    def __add__(self, other):
+        if isinstance(other, ViewList):
+            return self.__class__(self.data + other.data,
+                                  items=(self.items + other.items))
+        else:
+            raise TypeError('adding non-ViewList to a ViewList')
+
+    def __radd__(self, other):
+        if isinstance(other, ViewList):
+            return self.__class__(other.data + self.data,
+                                  items=(other.items + self.items))
+        else:
+            raise TypeError('adding ViewList to a non-ViewList')
+
+    def __iadd__(self, other):
+        if isinstance(other, ViewList):
+            self.data += other.data
+        else:
+            raise TypeError('argument to += must be a ViewList')
+        return self
+
+    def __mul__(self, n):
+        return self.__class__(self.data * n, items=(self.items * n))
+
+    __rmul__ = __mul__
+
+    def __imul__(self, n):
+        self.data *= n
+        self.items *= n
+        return self
+
+    def extend(self, other):
+        if not isinstance(other, ViewList):
+            raise TypeError('extending a ViewList with a non-ViewList')
+        if self.parent:
+            self.parent.insert(len(self.data) + self.parent_offset, other)
+        self.data.extend(other.data)
+        self.items.extend(other.items)
+
+    def append(self, item, source=None, offset=0):
+        if source is None:
+            self.extend(item)
+        else:
+            if self.parent:
+                self.parent.insert(len(self.data) + self.parent_offset, item,
+                                   source, offset)
+            self.data.append(item)
+            self.items.append((source, offset))
+
+    def insert(self, i, item, source=None, offset=0):
+        if source is None:
+            if not isinstance(item, ViewList):
+                raise TypeError('inserting non-ViewList with no source given')
+            self.data[i:i] = item.data
+            self.items[i:i] = item.items
+            if self.parent:
+                index = (len(self.data) + i) % len(self.data)
+                self.parent.insert(index + self.parent_offset, item)
+        else:
+            self.data.insert(i, item)
+            self.items.insert(i, (source, offset))
+            if self.parent:
+                index = (len(self.data) + i) % len(self.data)
+                self.parent.insert(index + self.parent_offset, item,
+                                   source, offset)
+
+    def pop(self, i=-1):
+        if self.parent:
+            index = (len(self.data) + i) % len(self.data)
+            self.parent.pop(index + self.parent_offset)
+        self.items.pop(i)
+        return self.data.pop(i)
+
+    def trim_start(self, n=1):
+        """
+        Remove items from the start of the list, without touching the parent.
+        """
+        if n > len(self.data):
+            raise IndexError("Size of trim too large; can't trim %s items "
+                             "from a list of size %s." % (n, len(self.data)))
+        elif n < 0:
+            raise IndexError('Trim size must be >= 0.')
+        del self.data[:n]
+        del self.items[:n]
+        if self.parent:
+            self.parent_offset += n
+
+    def trim_end(self, n=1):
+        """
+        Remove items from the end of the list, without touching the parent.
+        """
+        if n > len(self.data):
+            raise IndexError("Size of trim too large; can't trim %s items "
+                             "from a list of size %s." % (n, len(self.data)))
+        elif n < 0:
+            raise IndexError('Trim size must be >= 0.')
+        del self.data[-n:]
+        del self.items[-n:]
+
+    def remove(self, item):
+        index = self.index(item)
+        del self[index]
+
+    def count(self, item): return self.data.count(item)
+    def index(self, item): return self.data.index(item)
+
+    def reverse(self):
+        self.data.reverse()
+        self.items.reverse()
+        self.parent = None
+
+    def sort(self, *args):
+        tmp = zip(self.data, self.items)
+        tmp.sort(*args)
+        self.data = [entry[0] for entry in tmp]
+        self.items = [entry[1] for entry in tmp]
+        self.parent = None
+
+    def info(self, i):
+        """Return source & offset for index `i`."""
+        try:
+            return self.items[i]
+        except IndexError:
+            if i == len(self.data):     # Just past the end
+                return self.items[i - 1][0], None
+            else:
+                raise
+
+    def source(self, i):
+        """Return source for index `i`."""
+        return self.info(i)[0]
+
+    def offset(self, i):
+        """Return offset for index `i`."""
+        return self.info(i)[1]
+
+    def disconnect(self):
+        """Break link between this list and parent list."""
+        self.parent = None
+
+
+class StringList(ViewList):
+
+    """A `ViewList` with string-specific methods."""
+
+    def strip_indent(self, length, start=0, end=sys.maxint):
+        """
+        Strip `length` characters off the beginning of each item, in-place,
+        from index `start` to `end`.  No whitespace-checking is done on the
+        stripped text.  Does not affect slice parent.
+        """
+        self.data[start:end] = [line[length:]
+                                for line in self.data[start:end]]
+
+    def get_text_block(self, start, flush_left=0):
+        """
+        Return a contiguous block of text.
+
+        If `flush_left` is true, raise `UnexpectedIndentationError` if an
+        indented line is encountered before the text block ends (with a blank
+        line).
+        """
+        end = start
+        last = len(self.data)
+        while end < last:
+            line = self.data[end]
+            if not line.strip():
+                break
+            if flush_left and (line[0] == ' '):
+                source, offset = self.info(end)
+                raise UnexpectedIndentationError(self[start:end], source,
+                                                 offset + 1)
+            end += 1
+        return self[start:end]
+
+    def get_indented(self, start=0, until_blank=0, strip_indent=1,
+                     block_indent=None, first_indent=None):
+        """
+        Extract and return a StringList of indented lines of text.
+
+        Collect all lines with indentation, determine the minimum indentation,
+        remove the minimum indentation from all indented lines (unless
+        `strip_indent` is false), and return them. All lines up to but not
+        including the first unindented line will be returned.
+
+        :Parameters:
+          - `start`: The index of the first line to examine.
+          - `until_blank`: Stop collecting at the first blank line if true.
+          - `strip_indent`: Strip common leading indent if true (default).
+          - `block_indent`: The indent of the entire block, if known.
+          - `first_indent`: The indent of the first line, if known.
+
+        :Return:
+          - a StringList of indented lines with minimum indent removed;
+          - the amount of the indent;
+          - a boolean: did the indented block finish with a blank line or EOF?
+        """
+        indent = block_indent           # start with None if unknown
+        end = start
+        if block_indent is not None and first_indent is None:
+            first_indent = block_indent
+        if first_indent is not None:
+            end += 1
+        last = len(self.data)
+        while end < last:
+            line = self.data[end]
+            if line and (line[0] != ' '
+                         or (block_indent is not None
+                             and line[:block_indent].strip())):
+                # Line not indented or insufficiently indented.
+                # Block finished properly iff the last indented line was blank:
+                blank_finish = ((end > start)
+                                and not self.data[end - 1].strip())
+                break
+            stripped = line.lstrip()
+            if not stripped:            # blank line
+                if until_blank:
+                    blank_finish = 1
+                    break
+            elif block_indent is None:
+                line_indent = len(line) - len(stripped)
+                if indent is None:
+                    indent = line_indent
+                else:
+                    indent = min(indent, line_indent)
+            end += 1
+        else:
+            blank_finish = 1            # block ends at end of lines
+        block = self[start:end]
+        if first_indent is not None and block:
+            block.data[0] = block.data[0][first_indent:]
+        if indent and strip_indent:
+            block.strip_indent(indent, start=(first_indent is not None))
+        return block, indent or 0, blank_finish
+
+
+class StateMachineError(Exception): pass
+class UnknownStateError(StateMachineError): pass
+class DuplicateStateError(StateMachineError): pass
+class UnknownTransitionError(StateMachineError): pass
+class DuplicateTransitionError(StateMachineError): pass
+class TransitionPatternNotFound(StateMachineError): pass
+class TransitionMethodNotFound(StateMachineError): pass
+class UnexpectedIndentationError(StateMachineError): pass
+
+
+class TransitionCorrection(Exception):
+
+    """
+    Raise from within a transition method to switch to another transition.
+
+    Raise with one argument, the new transition name.
+    """
+
+
+class StateCorrection(Exception):
+
+    """
+    Raise from within a transition method to switch to another state.
+
+    Raise with one or two arguments: new state name, and an optional new
+    transition name.
+    """
+
+
+def string2lines(astring, tab_width=8, convert_whitespace=0,
+                 whitespace=re.compile('[\v\f]')):
+    """
+    Return a list of one-line strings with tabs expanded and no newlines.
+
+    Each tab is expanded with between 1 and `tab_width` spaces, so that the
+    next character's index becomes a multiple of `tab_width` (8 by default).
+
+    Parameters:
+
+    - `astring`: a multi-line string.
+    - `tab_width`: the number of columns between tab stops.
+    - `convert_whitespace`: convert form feeds and vertical tabs to spaces?
+    """
+    if convert_whitespace:
+        astring = whitespace.sub(' ', astring)
+    return [s.expandtabs(tab_width) for s in astring.splitlines()]
+
+def _exception_data():
+    """
+    Return exception information:
+
+    - the exception's class name;
+    - the exception object;
+    - the name of the file containing the offending code;
+    - the line number of the offending code;
+    - the function name of the offending code.
+    """
+    type, value, traceback = sys.exc_info()
+    while traceback.tb_next:
+        traceback = traceback.tb_next
+    code = traceback.tb_frame.f_code
+    return (type.__name__, value, code.co_filename, traceback.tb_lineno,
+            code.co_name)

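As a rough usage sketch of the ViewList/StringList machinery above (assuming
the classes are importable from `docutils.statemachine`, their home in stock
Docutils; `text`, `lines` and the sample strings are illustrative only)::

    from docutils.statemachine import StringList, string2lines

    text = "para\n  indented one\n    indented two\nafter"
    lines = StringList(string2lines(text), source='example.txt')

    # Slices are linked "views": writing through the child updates the parent.
    view = lines[1:3]
    view[0] = '  INDENTED ONE'
    print lines[1]               # '  INDENTED ONE'
    print lines.info(1)          # ('example.txt', 1): source and offset

    # get_indented() collects the indented block starting at index 1 and
    # strips the common two-column indent.
    block, indent, blank_finish = lines.get_indented(start=1)
    print block.data             # ['INDENTED ONE', '  indented two']
    print indent                 # 2
    print blank_finish           # False: the block ended at an unindented line
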
Added: trunk/www/utils/helpers/docutils/docutils/transforms/__init__.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/transforms/__init__.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/transforms/__init__.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,166 @@
+# Authors: David Goodger, Ueli Schlaepfer
+# Contact: address@hidden
+# Revision: $Revision: 1.8 $
+# Date: $Date: 2002/10/24 00:50:16 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains modules for standard tree transforms available
+to Docutils components. Tree transforms serve a variety of purposes:
+
+- To tie up certain syntax-specific "loose ends" that remain after the
+  initial parsing of the input plaintext. These transforms are used to
+  supplement a limited syntax.
+
+- To automate the internal linking of the document tree (hyperlink
+  references, footnote references, etc.).
+
+- To extract useful information from the document tree. These
+  transforms may be used to construct (for example) indexes and tables
+  of contents.
+
+Each transform is an optional step that a Docutils Reader may choose to
+perform on the parsed document, depending on the input context. A Docutils
+Reader may also perform Reader-specific transforms before or after performing
+these standard transforms.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+from docutils import languages, ApplicationError, TransformSpec
+
+
+class TransformError(ApplicationError): pass
+
+
+class Transform:
+
+    """
+    Docutils transform component abstract base class.
+    """
+
+    default_priority = None
+    """Numerical priority of this transform, 0 through 999 (override)."""
+
+    def __init__(self, document, startnode=None):
+        """
+        Initial setup for in-place document transforms.
+        """
+
+        self.document = document
+        """The document tree to transform."""
+
+        self.startnode = startnode
+        """Node from which to begin the transform.  For many transforms which
+        apply to the document as a whole, `startnode` is not set (i.e. its
+        value is `None`)."""
+
+        self.language = languages.get_language(
+            document.settings.language_code)
+        """Language module local to this document."""
+
+    def apply(self):
+        """Override to apply the transform to the document tree."""
+        raise NotImplementedError('subclass must override this method')
+
+
+class Transformer(TransformSpec):
+
+    """
+    Stores transforms (`Transform` classes) and applies them to document
+    trees.  Also keeps track of components by component type name.
+    """
+
+    from docutils.transforms import universal
+
+    default_transforms = (universal.Decorations,
+                          universal.FinalChecks,
+                          universal.Messages)
+    """These transforms are applied to all document trees."""
+
+    def __init__(self, document):
+        self.transforms = []
+        """List of transforms to apply.  Each item is a 3-tuple:
+        ``(priority string, transform class, pending node or None)``."""
+
+        self.document = document
+        """The `nodes.document` object this Transformer is attached to."""
+
+        self.applied = []
+        """Transforms already applied, in order."""
+
+        self.sorted = 0
+        """Boolean: is `self.tranforms` sorted?"""
+
+        self.components = {}
+        """Mapping of component type name to component object.  Set by
+        `self.populate_from_components()`."""
+
+        self.serialno = 0
+        """Internal serial number to keep track of the add order of
+        transforms."""
+
+    def add_transform(self, transform_class, priority=None):
+        """
+        Store a single transform.  Use `priority` to override the default.
+        """
+        if priority is None:
+            priority = transform_class.default_priority
+        priority_string = self.get_priority_string(priority)
+        self.transforms.append((priority_string, transform_class, None))
+        self.sorted = 0
+
+    def add_transforms(self, transform_list):
+        """Store multiple transforms, with default priorities."""
+        for transform_class in transform_list:
+            priority_string = self.get_priority_string(
+                transform_class.default_priority)
+            self.transforms.append((priority_string, transform_class, None))
+        self.sorted = 0
+
+    def add_pending(self, pending, priority=None):
+        """Store a transform with an associated `pending` node."""
+        transform_class = pending.transform
+        if priority is None:
+            priority = transform_class.default_priority
+        priority_string = self.get_priority_string(priority)
+        self.transforms.append((priority_string, transform_class, pending))
+        self.sorted = 0
+
+    def get_priority_string(self, priority):
+        """
+        Return a string, `priority` combined with `self.serialno`.
+
+        This ensures FIFO order on transforms with identical priority.
+        """
+        self.serialno += 1
+        return '%03d-%03d' % (priority, self.serialno)
+
+    def populate_from_components(self, components):
+        """
+        Store each component's default transforms, with default priorities.
+        Also, store components by type name in a mapping for later lookup.
+        """
+        self.add_transforms(self.default_transforms)
+        for component in components:
+            if component is None:
+                continue
+            self.add_transforms(component.default_transforms)
+            self.components[component.component_type] = component
+        self.sorted = 0
+
+    def apply_transforms(self):
+        """Apply all of the stored transforms, in priority order."""
+        self.document.reporter.attach_observer(
+            self.document.note_transform_message)
+        while self.transforms:
+            if not self.sorted:
+                # Unsorted initially, and whenever a transform is added.
+                self.transforms.sort()
+                self.transforms.reverse()
+                self.sorted = 1
+            priority, transform_class, pending = self.transforms.pop()
+            transform = transform_class(self.document, startnode=pending)
+            transform.apply()
+            self.applied.append((priority, transform_class, pending))

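The priority-string scheme in `Transformer.get_priority_string()` can be
sketched with plain functions (illustration only, not a Docutils API)::

    serialno = 0

    def priority_string(priority):
        # Mirrors Transformer.get_priority_string(): priority plus an
        # add-order serial number, zero-padded so string sort == numeric sort.
        global serialno
        serialno += 1
        return '%03d-%03d' % (priority, serialno)

    keys = [priority_string(780), priority_string(210), priority_string(780)]
    keys.sort()
    print keys      # ['210-002', '780-001', '780-003']
    # apply_transforms() sorts, reverses and pops from the end, so the lowest
    # priority string runs first, and ties run in the order they were added.
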
Added: trunk/www/utils/helpers/docutils/docutils/transforms/components.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/transforms/components.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/transforms/components.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,54 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.8 $
+# Date: $Date: 2002/10/24 00:50:34 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Docutils component-related transforms.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import os
+import re
+import time
+from docutils import nodes, utils
+from docutils import ApplicationError, DataError
+from docutils.transforms import Transform, TransformError
+
+
+class Filter(Transform):
+
+    """
+    Include or exclude elements which depend on a specific Docutils component.
+
+    For use with `nodes.pending` elements.  A "pending" element's dictionary
+    attribute ``details`` must contain the keys "component" and "format".  The
+    value of ``details['component']`` must match the type name of the
+    component the elements depend on (e.g. "writer").  The value of
+    ``details['format']`` is the name of a specific format or context of that
+    component (e.g. "html").  If the matching Docutils component supports that
+    format or context, the "pending" element is replaced by the contents of
+    ``details['nodes']`` (a list of nodes); otherwise, the "pending" element
+    is removed.
+
+    For example, the reStructuredText "meta" directive creates a "pending"
+    element containing a "meta" element (in ``pending.details['nodes']``).
+    Only writers (``pending.details['component'] == 'writer'``) supporting the
+    "html" format (``pending.details['format'] == 'html'``) will include the
+    "meta" element; it will be deleted from the output of all other writers.
+    """
+
+    default_priority = 780
+
+    def apply(self):
+        pending = self.startnode
+        component_type = pending.details['component'] # 'reader' or 'writer'
+        format = pending.details['format']
+        component = self.document.transformer.components[component_type]
+        if component.supports(format):
+            pending.parent.replace(pending, pending.details['nodes'])
+        else:
+            pending.parent.remove(pending)

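A hedged sketch of how a directive might schedule the `Filter` transform; the
positional `nodes.pending(transform, details)` signature and the `nodes.raw`
payload standing in for the real "meta" element are assumptions, not code
from this revision::

    from docutils import nodes
    from docutils.transforms import components

    # Payload to keep only if the active writer supports the "html" format.
    payload = nodes.raw('', '<meta name="keywords" content="docutils">',
                        format='html')
    # pending signature assumed: transform class, then the details dict.
    pending = nodes.pending(components.Filter,
                            {'component': 'writer',
                             'format': 'html',
                             'nodes': [payload]})
    # A directive would insert `pending` into the tree and register it via
    # document.note_pending(pending); when Filter.apply() runs, the pending
    # node is replaced by the payload or removed, depending on the writer.
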
Added: trunk/www/utils/helpers/docutils/docutils/transforms/frontmatter.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/transforms/frontmatter.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/transforms/frontmatter.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,399 @@
+# Authors: David Goodger, Ueli Schlaepfer
+# Contact: address@hidden
+# Revision: $Revision: 1.14 $
+# Date: $Date: 2003/05/29 15:17:58 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Transforms related to the front matter of a document (information
+found before the main text):
+
+- `DocTitle`: Used to transform a lone top-level section's title to
+  the document title, and promote a remaining lone top-level section's
+  title to the document subtitle.
+
+- `DocInfo`: Used to transform a bibliographic field list into docinfo
+  elements.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import re
+from docutils import nodes, utils
+from docutils.transforms import TransformError, Transform
+
+
+class DocTitle(Transform):
+
+    """
+    In reStructuredText_, there is no way to specify a document title
+    and subtitle explicitly. Instead, we can supply the document title
+    (and possibly the subtitle as well) implicitly, and use this
+    two-step transform to "raise" or "promote" the title(s) (and their
+    corresponding section contents) to the document level.
+
+    1. If the document contains a single top-level section as its
+       first non-comment element, the top-level section's title
+       becomes the document's title, and the top-level section's
+       contents become the document's immediate contents. The lone
+       top-level section header must be the first non-comment element
+       in the document.
+
+       For example, take this input text::
+
+           =================
+            Top-Level Title
+           =================
+
+           A paragraph.
+
+       Once parsed, it looks like this::
+
+           <document>
+               <section name="top-level title">
+                   <title>
+                       Top-Level Title
+                   <paragraph>
+                       A paragraph.
+
+       After running the DocTitle transform, we have::
+
+           <document name="top-level title">
+               <title>
+                   Top-Level Title
+               <paragraph>
+                   A paragraph.
+
+    2. If step 1 successfully determines the document title, we
+       continue by checking for a subtitle.
+
+       If the lone top-level section itself contains a single
+       second-level section as its first non-comment element, that
+       section's title is promoted to the document's subtitle, and
+       that section's contents become the document's immediate
+       contents. Given this input text::
+
+           =================
+            Top-Level Title
+           =================
+
+           Second-Level Title
+           ~~~~~~~~~~~~~~~~~~
+
+           A paragraph.
+
+       After parsing and running the Section Promotion transform, the
+       result is::
+
+           <document name="top-level title">
+               <title>
+                   Top-Level Title
+               <subtitle name="second-level title">
+                   Second-Level Title
+               <paragraph>
+                   A paragraph.
+
+       (Note that the implicit hyperlink target generated by the
+       "Second-Level Title" is preserved on the "subtitle" element
+       itself.)
+
+    Any comment elements occurring before the document title or
+    subtitle are accumulated and inserted as the first body elements
+    after the title(s).
+    """
+
+    default_priority = 320
+
+    def apply(self):
+        if not getattr(self.document.settings, 'doctitle_xform', 1):
+            return
+        if self.promote_document_title():
+            self.promote_document_subtitle()
+
+    def promote_document_title(self):
+        section, index = self.candidate_index()
+        if index is None:
+            return None
+        document = self.document
+        # Transfer the section's attributes to the document element (at root):
+        document.attributes.update(section.attributes)
+        document[:] = (section[:1]        # section title
+                       + document[:index] # everything that was in the
+                                          # document before the section
+                       + section[1:])     # everything that was in the section
+        return 1
+
+    def promote_document_subtitle(self):
+        subsection, index = self.candidate_index()
+        if index is None:
+            return None
+        subtitle = nodes.subtitle()
+        # Transfer the subsection's attributes to the new subtitle:
+        subtitle.attributes.update(subsection.attributes)
+        # Transfer the contents of the subsection's title to the subtitle:
+        subtitle[:] = subsection[0][:]
+        document = self.document
+        document[:] = (document[:1]       # document title
+                       + [subtitle]
+                       # everything that was before the section:
+                       + document[1:index]
+                       # everything that was in the subsection:
+                       + subsection[1:])
+        return 1
+
+    def candidate_index(self):
+        """
+        Find and return the promotion candidate and its index.
+
+        Return (None, None) if no valid candidate was found.
+        """
+        document = self.document
+        index = document.first_child_not_matching_class(
+              nodes.PreBibliographic)
+        if index is None or len(document) > (index + 1) or \
+              not isinstance(document[index], nodes.section):
+            return None, None
+        else:
+            return document[index], index
+
+
+class DocInfo(Transform):
+
+    """
+    This transform is specific to the reStructuredText_ markup syntax;
+    see "Bibliographic Fields" in the `reStructuredText Markup
+    Specification`_ for a high-level description. This transform
+    should be run *after* the `DocTitle` transform.
+
+    Given a field list as the first non-comment element after the
+    document title and subtitle (if present), registered bibliographic
+    field names are transformed to the corresponding DTD elements,
+    becoming child elements of the "docinfo" element (except for a
+    dedication and/or an abstract, which become "topic" elements after
+    "docinfo").
+
+    For example, given this document fragment after parsing::
+
+        <document>
+            <title>
+                Document Title
+            <field_list>
+                <field>
+                    <field_name>
+                        Author
+                    <field_body>
+                        <paragraph>
+                            A. Name
+                <field>
+                    <field_name>
+                        Status
+                    <field_body>
+                        <paragraph>
+                            $RCSfile: frontmatter.py,v $
+            ...
+
+    After running the bibliographic field list transform, the
+    resulting document tree would look like this::
+
+        <document>
+            <title>
+                Document Title
+            <docinfo>
+                <author>
+                    A. Name
+                <status>
+                    frontmatter.py
+            ...
+
+    The "Status" field contained an expanded RCS keyword, which is
+    normally (but optionally) cleaned up by the transform. The sole
+    contents of the field body must be a paragraph containing an
+    expanded RCS keyword of the form "$keyword: expansion text $". Any
+    RCS keyword can be processed in any bibliographic field. The
+    dollar signs and leading RCS keyword name are removed. Extra
+    processing is done for the following RCS keywords:
+
+    - "RCSfile" expands to the name of the file in the RCS or CVS
+      repository, which is the name of the source file with a ",v"
+      suffix appended. The transform will remove the ",v" suffix.
+
+    - "Date" expands to the format "YYYY/MM/DD hh:mm:ss" (in the UTC
+      time zone). The RCS Keywords transform will extract just the
+      date itself and transform it to an ISO 8601 format date, as in
+      "2000-12-31".
+
+      (Since the source file for this text is itself stored under CVS,
+      we can't show an example of the "Date" RCS keyword because we
+      can't prevent any RCS keywords used in this explanation from
+      being expanded. Only the "RCSfile" keyword is stable; its
+      expansion text changes only if the file name changes.)
+    """
+
+    default_priority = 340
+
+    biblio_nodes = {
+          'author': nodes.author,
+          'authors': nodes.authors,
+          'organization': nodes.organization,
+          'address': nodes.address,
+          'contact': nodes.contact,
+          'version': nodes.version,
+          'revision': nodes.revision,
+          'status': nodes.status,
+          'date': nodes.date,
+          'copyright': nodes.copyright,
+          'dedication': nodes.topic,
+          'abstract': nodes.topic}
+    """Canonical field name (lowcased) to node class name mapping for
+    bibliographic fields (field_list)."""
+
+    def apply(self):
+        if not getattr(self.document.settings, 'docinfo_xform', 1):
+            return
+        document = self.document
+        index = document.first_child_not_matching_class(
+              nodes.PreBibliographic)
+        if index is None:
+            return
+        candidate = document[index]
+        if isinstance(candidate, nodes.field_list):
+            biblioindex = document.first_child_not_matching_class(
+                  nodes.Titular)
+            nodelist = self.extract_bibliographic(candidate)
+            del document[index]         # untransformed field list (candidate)
+            document[biblioindex:biblioindex] = nodelist
+        return
+
+    def extract_bibliographic(self, field_list):
+        docinfo = nodes.docinfo()
+        bibliofields = self.language.bibliographic_fields
+        labels = self.language.labels
+        topics = {'dedication': None, 'abstract': None}
+        for field in field_list:
+            try:
+                name = field[0][0].astext()
+                normedname = nodes.fully_normalize_name(name)
+                if not (len(field) == 2 and bibliofields.has_key(normedname)
+                        and self.check_empty_biblio_field(field, name)):
+                    raise TransformError
+                canonical = bibliofields[normedname]
+                biblioclass = self.biblio_nodes[canonical]
+                if issubclass(biblioclass, nodes.TextElement):
+                    if not self.check_compound_biblio_field(field, name):
+                        raise TransformError
+                    utils.clean_rcs_keywords(
+                          field[1][0], self.rcs_keyword_substitutions)
+                    docinfo.append(biblioclass('', '', *field[1][0]))
+                elif issubclass(biblioclass, nodes.authors):
+                    self.extract_authors(field, name, docinfo)
+                elif issubclass(biblioclass, nodes.topic):
+                    if topics[canonical]:
+                        field[-1] += self.document.reporter.warning(
+                            'There can only be one "%s" field.' % name,
+                            base_node=field)
+                        raise TransformError
+                    title = nodes.title(name, labels[canonical])
+                    topics[canonical] = biblioclass(
+                        '', title, CLASS=canonical, *field[1].children)
+                else:
+                    docinfo.append(biblioclass('', *field[1].children))
+            except TransformError:
+                if len(field[-1]) == 1 \
+                       and isinstance(field[-1][0], nodes.paragraph):
+                    utils.clean_rcs_keywords(
+                        field[-1][0], self.rcs_keyword_substitutions)
+                docinfo.append(field)
+        nodelist = []
+        if len(docinfo) != 0:
+            nodelist.append(docinfo)
+        for name in ('dedication', 'abstract'):
+            if topics[name]:
+                nodelist.append(topics[name])
+        return nodelist
+
+    def check_empty_biblio_field(self, field, name):
+        if len(field[-1]) < 1:
+            field[-1] += self.document.reporter.warning(
+                  'Cannot extract empty bibliographic field "%s".' % name,
+                  base_node=field)
+            return None
+        return 1
+
+    def check_compound_biblio_field(self, field, name):
+        if len(field[-1]) > 1:
+            field[-1] += self.document.reporter.warning(
+                  'Cannot extract compound bibliographic field "%s".' % name,
+                  base_node=field)
+            return None
+        if not isinstance(field[-1][0], nodes.paragraph):
+            field[-1] += self.document.reporter.warning(
+                  'Cannot extract bibliographic field "%s" containing '
+                  'anything other than a single paragraph.' % name,
+                  base_node=field)
+            return None
+        return 1
+
+    rcs_keyword_substitutions = [
+          (re.compile(r'\$' r'Date: (\d\d\d\d)/(\d\d)/(\d\d) [\d:]+ \$$',
+                      re.IGNORECASE), r'\1-\2-\3'),
+          (re.compile(r'\$' r'RCSfile: (.+),v \$$', re.IGNORECASE), r'\1'),
+          (re.compile(r'\$[a-zA-Z]+: (.+) \$$'), r'\1'),]
+
+    def extract_authors(self, field, name, docinfo):
+        try:
+            if len(field[1]) == 1:
+                if isinstance(field[1][0], nodes.paragraph):
+                    authors = self.authors_from_one_paragraph(field)
+                elif isinstance(field[1][0], nodes.bullet_list):
+                    authors = self.authors_from_bullet_list(field)
+                else:
+                    raise TransformError
+            else:
+                authors = self.authors_from_paragraphs(field)
+            authornodes = [nodes.author('', '', *author)
+                           for author in authors if author]
+            if len(authornodes) > 1:
+                docinfo.append(nodes.authors('', *authornodes))
+            elif len(authornodes) == 1:
+                docinfo.append(authornodes[0])
+            else:
+                raise TransformError
+        except TransformError:
+            field[-1] += self.document.reporter.warning(
+                  'Bibliographic field "%s" incompatible with extraction: '
+                  'it must contain either a single paragraph (with authors '
+                  'separated by one of "%s"), multiple paragraphs (one per '
+                  'author), or a bullet list with one paragraph (one author) '
+                  'per item.'
+                  % (name, ''.join(self.language.author_separators)),
+                  base_node=field)
+            raise
+
+    def authors_from_one_paragraph(self, field):
+        text = field[1][0].astext().strip()
+        if not text:
+            raise TransformError
+        for authorsep in self.language.author_separators:
+            authornames = text.split(authorsep)
+            if len(authornames) > 1:
+                break
+        authornames = [author.strip() for author in authornames]
+        authors = [[nodes.Text(author)] for author in authornames if author]
+        return authors
+
+    def authors_from_bullet_list(self, field):
+        authors = []
+        for item in field[1][0]:
+            if len(item) != 1 or not isinstance(item[0], nodes.paragraph):
+                raise TransformError
+            authors.append(item[0].children)
+        if not authors:
+            raise TransformError
+        return authors
+
+    def authors_from_paragraphs(self, field):
+        for item in field[1]:
+            if not isinstance(item, nodes.paragraph):
+                raise TransformError
+        authors = [item.children for item in field[1]]
+        return authors

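The RCS keyword clean-up that `DocInfo` performs can be illustrated on plain
strings, reusing the patterns from `rcs_keyword_substitutions` above (the
real code operates on paragraph nodes via `utils.clean_rcs_keywords`); the
dollar signs are split, as in the source, to keep RCS from expanding the
examples::

    import re

    substitutions = [
        (re.compile(r'\$' r'Date: (\d\d\d\d)/(\d\d)/(\d\d) [\d:]+ \$$',
                    re.IGNORECASE), r'\1-\2-\3'),
        (re.compile(r'\$' r'RCSfile: (.+),v \$$', re.IGNORECASE), r'\1'),
        (re.compile(r'\$[a-zA-Z]+: (.+) \$$'), r'\1'),
    ]

    def clean(text):
        # Apply each pattern in turn to a plain string (illustration only).
        for pattern, replacement in substitutions:
            text = pattern.sub(replacement, text)
        return text

    print clean('$' 'Date: 2003/05/29 15:17:58 $')   # 2003-05-29
    print clean('$' 'RCSfile: frontmatter.py,v $')   # frontmatter.py
    print clean('$' 'Revision: 1.14 $')              # 1.14
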
Added: trunk/www/utils/helpers/docutils/docutils/transforms/misc.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/transforms/misc.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/transforms/misc.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,62 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.2 $
+# Date: $Date: 2003/05/24 20:48:43 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Miscellaneous transforms.
+"""
+
+__docformat__ = 'reStructuredText'
+
+from docutils import nodes
+from docutils.transforms import Transform, TransformError
+
+
+class CallBack(Transform):
+
+    """
+    Inserts a callback into a document.  The callback is called when the
+    transform is applied, which is determined by its priority.
+
+    For use with `nodes.pending` elements.  Requires a ``details['callback']``
+    entry, a bound method or function which takes one parameter: the pending
+    node.  Other data can be stored in the ``details`` attribute or in the
+    object hosting the callback method.
+    """
+
+    default_priority = 990
+
+    def apply(self):
+        pending = self.startnode
+        pending.details['callback'](pending)
+        pending.parent.remove(pending)
+
+
+class ClassAttribute(Transform):
+
+    default_priority = 210
+
+    def apply(self):
+        pending = self.startnode
+        class_value = pending.details['class']
+        parent = pending.parent
+        child = pending
+        while parent:
+            for index in range(parent.index(child) + 1, len(parent)):
+                element = parent[index]
+                if isinstance(element, nodes.comment):
+                    continue
+                element.set_class(class_value)
+                pending.parent.remove(pending)
+                return
+            else:
+                child = parent
+                parent = parent.parent
+        error = self.document.reporter.error(
+            'No suitable element following "%s" directive'
+            % pending.details['directive'],
+            nodes.literal_block(pending.rawsource, pending.rawsource),
+            line=pending.line)
+        pending.parent.replace(pending, error)

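A minimal sketch of scheduling the `CallBack` transform above; the positional
`nodes.pending(transform, details)` signature and the `announce` callback are
assumptions for illustration only::

    from docutils import nodes
    from docutils.transforms import misc

    def announce(pending_node):
        # Called once when the transform is applied (default priority 990).
        print 'callback fired:', pending_node.details

    # pending signature assumed: transform class, then the details dict.
    pending = nodes.pending(misc.CallBack, {'callback': announce})
    # Insert `pending` into the document tree and register it with
    # document.note_pending(pending); apply_transforms() then calls
    # announce(pending) and removes the pending node from the tree.
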
Added: trunk/www/utils/helpers/docutils/docutils/transforms/parts.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/transforms/parts.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/transforms/parts.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,176 @@
+# Authors: David Goodger, Ueli Schlaepfer, Dmitry Jemerov
+# Contact: address@hidden
+# Revision: $Revision: 1.14 $
+# Date: $Date: 2003/05/24 20:48:58 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Transforms related to document parts.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import re
+import sys
+from docutils import nodes, utils
+from docutils.transforms import TransformError, Transform
+
+
+class SectNum(Transform):
+
+    """
+    Automatically assigns numbers to the titles of document sections.
+
+    It is possible to limit the maximum section level for which the numbers
+    are added.  For those sections that are auto-numbered, the "autonum"
+    attribute is set, informing the contents table generator that a different
+    form of the TOC should be used.
+    """
+
+    default_priority = 710
+    """Should be applied before `Contents`."""
+
+    def apply(self):
+        self.maxdepth = self.startnode.details.get('depth', sys.maxint)
+        self.startnode.parent.remove(self.startnode)
+        self.update_section_numbers(self.document)
+
+    def update_section_numbers(self, node, prefix=(), depth=0):
+        depth += 1
+        sectnum = 1
+        for child in node:
+            if isinstance(child, nodes.section):
+                numbers = prefix + (str(sectnum),)
+                title = child[0]
+                # Use &nbsp; for spacing:
+                generated = nodes.generated(
+                    '', '.'.join(numbers) + u'\u00a0' * 3, CLASS='sectnum')
+                title.insert(0, generated)
+                title['auto'] = 1
+                if depth < self.maxdepth:
+                    self.update_section_numbers(child, numbers, depth)
+                sectnum += 1
+
+
+class Contents(Transform):
+
+    """
+    This transform generates a table of contents from the entire document tree
+    or from a single branch.  It locates "section" elements and builds them
+    into a nested bullet list, which is placed within a "topic".  A title is
+    either explicitly specified, taken from the appropriate language module,
+    or omitted (local table of contents).  The depth may be specified.
+    Two-way references between the table of contents and section titles are
+    generated (requires Writer support).
+
+    This transform requires a startnode, which contains generation
+    options and provides the location for the generated table of contents (the
+    startnode is replaced by the table of contents "topic").
+    """
+
+    default_priority = 720
+
+    def apply(self):
+        topic = nodes.topic(CLASS='contents')
+        details = self.startnode.details
+        if details.has_key('class'):
+            topic.set_class(details['class'])
+        title = details['title']
+        if details.has_key('local'):
+            startnode = self.startnode.parent
+            # @@@ generate an error if the startnode (directive) not at
+            # section/document top-level? Drag it up until it is?
+            while not isinstance(startnode, nodes.Structural):
+                startnode = startnode.parent
+        else:
+            startnode = self.document
+            if not title:
+                title = nodes.title('', self.language.labels['contents'])
+        if title:
+            name = title.astext()
+            topic += title
+        else:
+            name = self.language.labels['contents']
+        name = nodes.fully_normalize_name(name)
+        if not self.document.has_name(name):
+            topic['name'] = name
+        self.document.note_implicit_target(topic)
+        self.toc_id = topic['id']
+        if details.has_key('backlinks'):
+            self.backlinks = details['backlinks']
+        else:
+            self.backlinks = self.document.settings.toc_backlinks
+        contents = self.build_contents(startnode)
+        if len(contents):
+            topic += contents
+            self.startnode.parent.replace(self.startnode, topic)
+        else:
+            self.startnode.parent.remove(self.startnode)
+
+    def build_contents(self, node, level=0):
+        level += 1
+        sections = []
+        i = len(node) - 1
+        while i >= 0 and isinstance(node[i], nodes.section):
+            sections.append(node[i])
+            i -= 1
+        sections.reverse()
+        entries = []
+        autonum = 0
+        depth = self.startnode.details.get('depth', sys.maxint)
+        for section in sections:
+            title = section[0]
+            auto = title.get('auto')    # May be set by SectNum.
+            entrytext = self.copy_and_filter(title)
+            reference = nodes.reference('', '', refid=section['id'],
+                                        *entrytext)
+            ref_id = self.document.set_id(reference)
+            entry = nodes.paragraph('', '', reference)
+            item = nodes.list_item('', entry)
+            if self.backlinks == 'entry':
+                title['refid'] = ref_id
+            elif self.backlinks == 'top':
+                title['refid'] = self.toc_id
+            if level < depth:
+                subsects = self.build_contents(section, level)
+                item += subsects
+            entries.append(item)
+        if entries:
+            contents = nodes.bullet_list('', *entries)
+            if auto:
+                contents.set_class('auto-toc')
+            return contents
+        else:
+            return []
+
+    def copy_and_filter(self, node):
+        """Return a copy of a title, with references, images, etc. removed."""
+        visitor = ContentsFilter(self.document)
+        node.walkabout(visitor)
+        return visitor.get_entry_text()
+
+
+class ContentsFilter(nodes.TreeCopyVisitor):
+
+    def get_entry_text(self):
+        return self.get_tree_copy().get_children()
+
+    def visit_citation_reference(self, node):
+        raise nodes.SkipNode
+
+    def visit_footnote_reference(self, node):
+        raise nodes.SkipNode
+
+    def visit_image(self, node):
+        if node.hasattr('alt'):
+            self.parent.append(nodes.Text(node['alt']))
+        raise nodes.SkipNode
+
+    def ignore_node_but_process_children(self, node):
+        raise nodes.SkipDeparture
+
+    visit_interpreted = ignore_node_but_process_children
+    visit_problematic = ignore_node_but_process_children
+    visit_reference = ignore_node_but_process_children
+    visit_target = ignore_node_but_process_children

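A small sketch of the label text `SectNum.update_section_numbers()` prepends
to each numbered section title (plain strings only; the real transform builds
a "generated" node and inserts it into the title)::

    def sectnum_label(numbers):
        # '.'-joined section numbers followed by three no-break spaces,
        # matching the generated node text built above.
        return '.'.join(numbers) + u'\u00a0' * 3

    print repr(sectnum_label(('1',)))        # u'1\xa0\xa0\xa0'
    print repr(sectnum_label(('2', '1')))    # u'2.1\xa0\xa0\xa0'
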
Added: trunk/www/utils/helpers/docutils/docutils/transforms/peps.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/transforms/peps.py	2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/transforms/peps.py	2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,294 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.21 $
+# Date: $Date: 2002/12/07 03:05:29 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Transforms for PEP processing.
+
+- `Headers`: Used to transform a PEP's initial RFC-2822 header.  It remains a
+  field list, but some entries get processed.
+- `Contents`: Auto-inserts a table of contents.
+- `PEPZero`: Special processing for PEP 0.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import os
+import re
+import time
+from docutils import nodes, utils
+from docutils import ApplicationError, DataError
+from docutils.transforms import Transform, TransformError
+from docutils.transforms import parts, references, misc
+
+
+class Headers(Transform):
+
+    """
+    Process fields in a PEP's initial RFC-2822 header.
+    """
+
+    default_priority = 360
+
+    pep_url = 'pep-%04d.html'
+    pep_cvs_url = ('http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/python/'
+                   'python/nondist/peps/pep-%04d.txt')
+    rcs_keyword_substitutions = (
+          (re.compile(r'\$' r'RCSfile: (.+),v \$$', re.IGNORECASE), r'\1'),
+          (re.compile(r'\$[a-zA-Z]+: (.+) \$$'), r'\1'),)
+
+    def apply(self):
+        if not len(self.document):
+            # @@@ replace these DataErrors with proper system messages
+            raise DataError('Document tree is empty.')
+        header = self.document[0]
+        if not isinstance(header, nodes.field_list) or \
+              header.get('class') != 'rfc2822':
+            raise DataError('Document does not begin with an RFC-2822 '
+                            'header; it is not a PEP.')
+        pep = None
+        for field in header:
+            if field[0].astext().lower() == 'pep': # should be the first field
+                value = field[1].astext()
+                try:
+                    pep = int(value)
+                    cvs_url = self.pep_cvs_url % pep
+                except ValueError:
+                    pep = value
+                    cvs_url = None
+                    msg = self.document.reporter.warning(
+                        '"PEP" header must contain an integer; "%s" is an '
+                        'invalid value.' % pep, base_node=field)
+                    msgid = self.document.set_id(msg)
+                    prb = nodes.problematic(value, value or '(none)',
+                                            refid=msgid)
+                    prbid = self.document.set_id(prb)
+                    msg.add_backref(prbid)
+                    if len(field[1]):
+                        field[1][0][:] = [prb]
+                    else:
+                        field[1] += nodes.paragraph('', '', prb)
+                break
+        if pep is None:
+            raise DataError('Document does not contain an RFC-2822 "PEP" '
+                            'header.')
+        if pep == 0:
+            # Special processing for PEP 0.
+            pending = nodes.pending(PEPZero)
+            self.document.insert(1, pending)
+            self.document.note_pending(pending)
+        if len(header) < 2 or header[1][0].astext().lower() != 'title':
+            raise DataError('No title!')
+        for field in header:
+            name = field[0].astext().lower()
+            body = field[1]
+            if len(body) > 1:
+                raise DataError('PEP header field body contains multiple '
+                                'elements:\n%s' % field.pformat(level=1))
+            elif len(body) == 1:
+                if not isinstance(body[0], nodes.paragraph):
+                    raise DataError('PEP header field body may only contain '
+                                    'a single paragraph:\n%s'
+                                    % field.pformat(level=1))
+            elif name == 'last-modified':
+                date = time.strftime(
+                      '%d-%b-%Y',
+                      time.localtime(os.stat(self.document['source'])[8]))
+                if cvs_url:
+                    body += nodes.paragraph(
+                        '', '', nodes.reference('', date, refuri=cvs_url))
+            else:
+                # empty
+                continue
+            para = body[0]
+            if name == 'author':
+                for node in para:
+                    if isinstance(node, nodes.reference):
+                        node.parent.replace(node, mask_email(node))
+            elif name == 'discussions-to':
+                for node in para:
+                    if isinstance(node, nodes.reference):
+                        node.parent.replace(node, mask_email(node, pep))
+            elif name in ('replaces', 'replaced-by', 'requires'):
+                newbody = []
+                space = nodes.Text(' ')
+                for refpep in re.split(',?\s+', body.astext()):
+                    pepno = int(refpep)
+                    newbody.append(nodes.reference(
+                          refpep, refpep, refuri=self.pep_url % pepno))
+                    newbody.append(space)
+                para[:] = newbody[:-1] # drop trailing space
+            elif name == 'last-modified':
+                utils.clean_rcs_keywords(para, self.rcs_keyword_substitutions)
+                if cvs_url:
+                    date = para.astext()
+                    para[:] = [nodes.reference('', date, refuri=cvs_url)]
+            elif name == 'content-type':
+                pep_type = para.astext()
+                uri = self.pep_url % 12
+                para[:] = [nodes.reference('', pep_type, refuri=uri)]
+            elif name == 'version' and len(body):
+                utils.clean_rcs_keywords(para, self.rcs_keyword_substitutions)
+
+
+class Contents(Transform):
+
+    """
+    Insert a table of contents transform placeholder into the document after
+    the RFC 2822 header.
+    """
+
+    default_priority = 380
+
+    def apply(self):
+        pending = nodes.pending(parts.Contents, {'title': None})
+        self.document.insert(1, pending)
+        self.document.note_pending(pending)
+
+
+class TargetNotes(Transform):
+
+    """
+    Locate the "References" section, insert a placeholder for an external
+    target footnote insertion transform at the end, and schedule the
+    transform to run immediately.
+    """
+
+    default_priority = 520
+
+    def apply(self):
+        doc = self.document
+        i = len(doc) - 1
+        refsect = copyright = None
+        while i >= 0 and isinstance(doc[i], nodes.section):
+            title_words = doc[i][0].astext().lower().split()
+            if 'references' in title_words:
+                refsect = doc[i]
+                break
+            elif 'copyright' in title_words:
+                copyright = i
+            i -= 1
+        if not refsect:
+            refsect = nodes.section()
+            refsect += nodes.title('', 'References')
+            doc.set_id(refsect)
+            if copyright:
+                # Put the new "References" section before "Copyright":
+                doc.insert(copyright, refsect)
+            else:
+                # Put the new "References" section at end of doc:
+                doc.append(refsect)
+        pending = nodes.pending(references.TargetNotes)
+        refsect.append(pending)
+        self.document.note_pending(pending, 0)
+        pending = nodes.pending(misc.CallBack,
+                                details={'callback': self.cleanup_callback})
+        refsect.append(pending)
+        self.document.note_pending(pending, 1)
+
+    def cleanup_callback(self, pending):
+        """
+        Remove an empty "References" section.
+
+        Called after the `references.TargetNotes` transform is complete.
+        """
+        if len(pending.parent) == 2:    # <title> and <pending>
+            pending.parent.parent.remove(pending.parent)
+
+
+class PEPZero(Transform):
+
+    """
+    Special processing for PEP 0.
+    """
+
+    default_priority = 760
+
+    def apply(self):
+        visitor = PEPZeroSpecial(self.document)
+        self.document.walk(visitor)
+        self.startnode.parent.remove(self.startnode)
+
+
+class PEPZeroSpecial(nodes.SparseNodeVisitor):
+
+    """
+    Perform the special processing needed by PEP 0:
+
+    - Mask email addresses.
+
+    - Link PEP numbers in the second column of 4-column tables to the PEPs
+      themselves.
+    """
+
+    pep_url = Headers.pep_url
+
+    def unknown_visit(self, node):
+        pass
+
+    def visit_reference(self, node):
+        node.parent.replace(node, mask_email(node))
+
+    def visit_field_list(self, node):
+        if node.hasattr('class') and node['class'] == 'rfc2822':
+            raise nodes.SkipNode
+
+    def visit_tgroup(self, node):
+        self.pep_table = node['cols'] == 4
+        self.entry = 0
+
+    def visit_colspec(self, node):
+        self.entry += 1
+        if self.pep_table and self.entry == 2:
+            node['class'] = 'num'
+
+    def visit_row(self, node):
+        self.entry = 0
+
+    def visit_entry(self, node):
+        self.entry += 1
+        if self.pep_table and self.entry == 2 and len(node) == 1:
+            node['class'] = 'num'
+            p = node[0]
+            if isinstance(p, nodes.paragraph) and len(p) == 1:
+                text = p.astext()
+                try:
+                    pep = int(text)
+                    ref = self.pep_url % pep
+                    p[0] = nodes.reference(text, text, refuri=ref)
+                except ValueError:
+                    pass
+
+
+non_masked_addresses = ('address@hidden',
+                        'address@hidden',
+                        'address@hidden')
+
+def mask_email(ref, pepno=None):
+    """
+    Mask the email address in `ref` and return a replacement node.
+
+    `ref` is returned unchanged if it contains no email address.
+
+    For email addresses such as "user@host", mask the address as "user at
+    host" (text) to thwart simple email address harvesters (except for those
+    listed in `non_masked_addresses`).  If a PEP number (`pepno`) is given,
+    return a reference including a default email subject.
+    """
+    if ref.hasattr('refuri') and ref['refuri'].startswith('mailto:'):
+        if ref['refuri'][8:] in non_masked_addresses:
+            replacement = ref[0]
+        else:
+            replacement_text = ref.astext().replace('@', '&#32;&#97;t&#32;')
+            replacement = nodes.raw('', replacement_text, format='html')
+        if pepno is None:
+            return replacement
+        else:
+            ref['refuri'] += '?subject=PEP%%20%s' % pepno
+            ref[:] = [replacement]
+            return ref
+    else:
+        return ref

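A small sketch of the masking performed by mask_email() above, using a
made-up "user@host" address and the nodes API already imported by this
module (it assumes the bundled package is importable):

    from docutils import nodes
    from docutils.transforms.peps import mask_email

    # A reference node wrapping a hypothetical mailto: URI.
    ref = nodes.reference('', 'user@host', refuri='mailto:user@host')
    masked = mask_email(ref)
    # mask_email returns a raw HTML node in which "@" has been replaced by
    # character references that render as " at ", thwarting naive harvesters.
    print masked.astext()
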
Added: trunk/www/utils/helpers/docutils/docutils/transforms/references.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/transforms/references.py  2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/transforms/references.py  2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,762 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.18 $
+# Date: $Date: 2003/06/09 15:07:46 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Transforms for resolving references.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import re
+from docutils import nodes, utils
+from docutils.transforms import TransformError, Transform
+
+
+indices = xrange(sys.maxint)
+
+
+class ChainedTargets(Transform):
+
+    """
+    Attributes "refuri" and "refname" are migrated from the final direct
+    target up the chain of contiguous adjacent internal targets, using
+    `ChainedTargetResolver`.
+    """
+
+    default_priority = 420
+
+    def apply(self):
+        visitor = ChainedTargetResolver(self.document)
+        self.document.walk(visitor)
+
+
+class ChainedTargetResolver(nodes.SparseNodeVisitor):
+
+    """
+    Copy reference attributes up the length of a hyperlink target chain.
+
+    "Chained targets" are multiple adjacent internal hyperlink targets which
+    "point to" an external or indirect target.  After the transform, all
+    chained targets will effectively point to the same place.
+
+    Given the following ``document`` as input::
+
+        <document>
+            <target id="a" name="a">
+            <target id="b" name="b">
+            <target id="c" name="c" refuri="http://chained.external.targets";>
+            <target id="d" name="d">
+            <paragraph>
+                I'm known as "d".
+            <target id="e" name="e">
+            <target id="id1">
+            <target id="f" name="f" refname="d">
+
+    ``ChainedTargetResolver(document).walk()`` will transform the above into::
+
+        <document>
+            <target id="a" name="a" refuri="http://chained.external.targets";>
+            <target id="b" name="b" refuri="http://chained.external.targets";>
+            <target id="c" name="c" refuri="http://chained.external.targets";>
+            <target id="d" name="d">
+            <paragraph>
+                I'm known as "d".
+            <target id="e" name="e" refname="d">
+            <target id="id1" refname="d">
+            <target id="f" name="f" refname="d">
+    """
+
+    def unknown_visit(self, node):
+        pass
+
+    def visit_target(self, node):
+        if node.hasattr('refuri'):
+            attname = 'refuri'
+            call_if_named = self.document.note_external_target
+        elif node.hasattr('refname'):
+            attname = 'refname'
+            call_if_named = self.document.note_indirect_target
+        elif node.hasattr('refid'):
+            attname = 'refid'
+            call_if_named = None
+        else:
+            return
+        attval = node[attname]
+        index = node.parent.index(node)
+        for i in range(index - 1, -1, -1):
+            sibling = node.parent[i]
+            if not isinstance(sibling, nodes.target) \
+                  or sibling.hasattr('refuri') \
+                  or sibling.hasattr('refname') \
+                  or sibling.hasattr('refid'):
+                break
+            sibling[attname] = attval
+            if sibling.hasattr('name') and call_if_named:
+                call_if_named(sibling)
+
+
+class AnonymousHyperlinks(Transform):
+
+    """
+    Link anonymous references to targets.  Given::
+
+        <paragraph>
+            <reference anonymous="1">
+                internal
+            <reference anonymous="1">
+                external
+        <target anonymous="1" id="id1">
+        <target anonymous="1" id="id2" refuri="http://external";>
+
+    Corresponding references are linked via "refid" or resolved via "refuri"::
+
+        <paragraph>
+            <reference anonymous="1" refid="id1">
+                internal
+            <reference anonymous="1" refuri="http://external";>
+                external
+        <target anonymous="1" id="id1">
+        <target anonymous="1" id="id2" refuri="http://external";>
+    """
+
+    default_priority = 440
+
+    def apply(self):
+        if len(self.document.anonymous_refs) \
+              != len(self.document.anonymous_targets):
+            msg = self.document.reporter.error(
+                  'Anonymous hyperlink mismatch: %s references but %s '
+                  'targets.\nSee "backrefs" attribute for IDs.'
+                  % (len(self.document.anonymous_refs),
+                     len(self.document.anonymous_targets)))
+            msgid = self.document.set_id(msg)
+            for ref in self.document.anonymous_refs:
+                prb = nodes.problematic(
+                      ref.rawsource, ref.rawsource, refid=msgid)
+                prbid = self.document.set_id(prb)
+                msg.add_backref(prbid)
+                ref.parent.replace(ref, prb)
+            return
+        for ref, target in zip(self.document.anonymous_refs,
+                               self.document.anonymous_targets):
+            if target.hasattr('refuri'):
+                ref['refuri'] = target['refuri']
+                ref.resolved = 1
+            else:
+                ref['refid'] = target['id']
+                self.document.note_refid(ref)
+            target.referenced = 1
+
+
+class IndirectHyperlinks(Transform):
+
+    """
+    a) Indirect external references::
+
+           <paragraph>
+               <reference refname="indirect external">
+                   indirect external
+           <target id="id1" name="direct external"
+               refuri="http://indirect";>
+           <target id="id2" name="indirect external"
+               refname="direct external">
+
+       The "refuri" attribute is migrated back to all indirect targets
+       from the final direct target (i.e. a target not referring to
+       another indirect target)::
+
+           <paragraph>
+               <reference refname="indirect external">
+                   indirect external
+           <target id="id1" name="direct external"
+               refuri="http://indirect";>
+           <target id="id2" name="indirect external"
+               refuri="http://indirect";>
+
+       Once the attribute is migrated, the preexisting "refname" attribute
+       is dropped.
+
+    b) Indirect internal references::
+
+           <target id="id1" name="final target">
+           <paragraph>
+               <reference refname="indirect internal">
+                   indirect internal
+           <target id="id2" name="indirect internal 2"
+               refname="final target">
+           <target id="id3" name="indirect internal"
+               refname="indirect internal 2">
+
+       Targets which indirectly refer to an internal target become one-hop
+       indirect (their "refid" attributes are directly set to the internal
+       target's "id"). References which indirectly refer to an internal
+       target become direct internal references::
+
+           <target id="id1" name="final target">
+           <paragraph>
+               <reference refid="id1">
+                   indirect internal
+           <target id="id2" name="indirect internal 2" refid="id1">
+           <target id="id3" name="indirect internal" refid="id1">
+    """
+
+    default_priority = 460
+
+    def apply(self):
+        for target in self.document.indirect_targets:
+            if not target.resolved:
+                self.resolve_indirect_target(target)
+            self.resolve_indirect_references(target)
+
+    def resolve_indirect_target(self, target):
+        refname = target['refname']
+        reftarget_id = self.document.nameids.get(refname)
+        if not reftarget_id:
+            self.nonexistent_indirect_target(target)
+            return
+        reftarget = self.document.ids[reftarget_id]
+        if isinstance(reftarget, nodes.target) \
+              and not reftarget.resolved and reftarget.hasattr('refname'):
+            if hasattr(target, 'multiply_indirect'):
+                #and target.multiply_indirect):
+                #del target.multiply_indirect
+                self.circular_indirect_reference(target)
+                return
+            target.multiply_indirect = 1
+            self.resolve_indirect_target(reftarget) # multiply indirect
+            del target.multiply_indirect
+        if reftarget.hasattr('refuri'):
+            target['refuri'] = reftarget['refuri']
+            if target.hasattr('name'):
+                self.document.note_external_target(target)
+        elif reftarget.hasattr('refid'):
+            target['refid'] = reftarget['refid']
+            self.document.note_refid(target)
+        else:
+            try:
+                target['refid'] = reftarget['id']
+                self.document.note_refid(target)
+            except KeyError:
+                self.nonexistent_indirect_target(target)
+                return
+        del target['refname']
+        target.resolved = 1
+        reftarget.referenced = 1
+
+    def nonexistent_indirect_target(self, target):
+        self.indirect_target_error(target, 'which does not exist')
+
+    def circular_indirect_reference(self, target):
+        self.indirect_target_error(target, 'forming a circular reference')
+
+    def indirect_target_error(self, target, explanation):
+        naming = ''
+        if target.hasattr('name'):
+            naming = '"%s" ' % target['name']
+            reflist = self.document.refnames.get(target['name'], [])
+        else:
+            reflist = self.document.refids.get(target['id'], [])
+        naming += '(id="%s")' % target['id']
+        msg = self.document.reporter.error(
+              'Indirect hyperlink target %s refers to target "%s", %s.'
+              % (naming, target['refname'], explanation),
+              base_node=target)
+        msgid = self.document.set_id(msg)
+        for ref in reflist:
+            prb = nodes.problematic(
+                  ref.rawsource, ref.rawsource, refid=msgid)
+            prbid = self.document.set_id(prb)
+            msg.add_backref(prbid)
+            ref.parent.replace(ref, prb)
+        target.resolved = 1
+
+    def resolve_indirect_references(self, target):
+        if target.hasattr('refid'):
+            attname = 'refid'
+            call_if_named = 0
+            call_method = self.document.note_refid
+        elif target.hasattr('refuri'):
+            attname = 'refuri'
+            call_if_named = 1
+            call_method = self.document.note_external_target
+        else:
+            return
+        attval = target[attname]
+        if target.hasattr('name'):
+            name = target['name']
+            try:
+                reflist = self.document.refnames[name]
+            except KeyError, instance:
+                if target.referenced:
+                    return
+                msg = self.document.reporter.info(
+                      'Indirect hyperlink target "%s" is not referenced.'
+                      % name, base_node=target)
+                target.referenced = 1
+                return
+            delatt = 'refname'
+        else:
+            id = target['id']
+            try:
+                reflist = self.document.refids[id]
+            except KeyError, instance:
+                if target.referenced:
+                    return
+                msg = self.document.reporter.info(
+                      'Indirect hyperlink target id="%s" is not referenced.'
+                      % id, base_node=target)
+                target.referenced = 1
+                return
+            delatt = 'refid'
+        for ref in reflist:
+            if ref.resolved:
+                continue
+            del ref[delatt]
+            ref[attname] = attval
+            if not call_if_named or ref.hasattr('name'):
+                call_method(ref)
+            ref.resolved = 1
+            if isinstance(ref, nodes.target):
+                self.resolve_indirect_references(ref)
+        target.referenced = 1
+
+
+class ExternalTargets(Transform):
+
+    """
+    Given::
+
+        <paragraph>
+            <reference refname="direct external">
+                direct external
+        <target id="id1" name="direct external" refuri="http://direct";>
+
+    The "refname" attribute is replaced by the direct "refuri" attribute::
+
+        <paragraph>
+            <reference refuri="http://direct";>
+                direct external
+        <target id="id1" name="direct external" refuri="http://direct";>
+    """
+
+    default_priority = 640
+
+    def apply(self):
+        for target in self.document.external_targets:
+            if target.hasattr('refuri') and target.hasattr('name'):
+                name = target['name']
+                refuri = target['refuri']
+                try:
+                    reflist = self.document.refnames[name]
+                except KeyError, instance:
+                    if target.referenced:
+                        continue
+                    msg = self.document.reporter.info(
+                          'External hyperlink target "%s" is not referenced.'
+                          % name, base_node=target)
+                    target.referenced = 1
+                    continue
+                for ref in reflist:
+                    if ref.resolved:
+                        continue
+                    del ref['refname']
+                    ref['refuri'] = refuri
+                    ref.resolved = 1
+                target.referenced = 1
+
+
+class InternalTargets(Transform):
+
+    """
+    Given::
+
+        <paragraph>
+            <reference refname="direct internal">
+                direct internal
+        <target id="id1" name="direct internal">
+
+    The "refname" attribute is replaced by "refid" linking to the target's
+    "id"::
+
+        <paragraph>
+            <reference refid="id1">
+                direct internal
+        <target id="id1" name="direct internal">
+    """
+
+    default_priority = 660
+
+    def apply(self):
+        for target in self.document.internal_targets:
+            if target.hasattr('refuri') or target.hasattr('refid') \
+                  or not target.hasattr('name'):
+                continue
+            name = target['name']
+            refid = target['id']
+            try:
+                reflist = self.document.refnames[name]
+            except KeyError, instance:
+                if target.referenced:
+                    continue
+                msg = self.document.reporter.info(
+                      'Internal hyperlink target "%s" is not referenced.'
+                      % name, base_node=target)
+                target.referenced = 1
+                continue
+            for ref in reflist:
+                if ref.resolved:
+                    continue
+                del ref['refname']
+                ref['refid'] = refid
+                ref.resolved = 1
+            target.referenced = 1
+
+
+class Footnotes(Transform):
+
+    """
+    Assign numbers to autonumbered footnotes, and resolve links to footnotes,
+    citations, and their references.
+
+    Given the following ``document`` as input::
+
+        <document>
+            <paragraph>
+                A labeled autonumbered footnote reference:
+                <footnote_reference auto="1" id="id1" refname="footnote">
+            <paragraph>
+                An unlabeled autonumbered footnote reference:
+                <footnote_reference auto="1" id="id2">
+            <footnote auto="1" id="id3">
+                <paragraph>
+                    Unlabeled autonumbered footnote.
+            <footnote auto="1" id="footnote" name="footnote">
+                <paragraph>
+                    Labeled autonumbered footnote.
+
+    Auto-numbered footnotes have attribute ``auto="1"`` and no label.
+    Auto-numbered footnote_references have no reference text (they're
+    empty elements). When resolving the numbering, a ``label`` element
+    is added to the beginning of the ``footnote``, and reference text
+    to the ``footnote_reference``.
+
+    The transformed result will be::
+
+        <document>
+            <paragraph>
+                A labeled autonumbered footnote reference:
+                <footnote_reference auto="1" id="id1" refid="footnote">
+                    2
+            <paragraph>
+                An unlabeled autonumbered footnote reference:
+                <footnote_reference auto="1" id="id2" refid="id3">
+                    1
+            <footnote auto="1" id="id3" backrefs="id2">
+                <label>
+                    1
+                <paragraph>
+                    Unlabeled autonumbered footnote.
+            <footnote auto="1" id="footnote" name="footnote" backrefs="id1">
+                <label>
+                    2
+                <paragraph>
+                    Labeled autonumbered footnote.
+
+    Note that the footnotes are not in the same order as the references.
+
+    The labels and reference text are added to the auto-numbered ``footnote``
+    and ``footnote_reference`` elements.  Footnote elements are backlinked to
+    their references via "refids" attributes.  References are assigned "id"
+    and "refid" attributes.
+
+    After adding labels and reference text, the "auto" attributes can be
+    ignored.
+    """
+
+    default_priority = 620
+
+    autofootnote_labels = None
+    """Keep track of unlabeled autonumbered footnotes."""
+
+    symbols = [
+          # Entries 1-4 and 6 below are from section 12.51 of
+          # The Chicago Manual of Style, 14th edition.
+          '*',                          # asterisk/star
+          u'\u2020',                    # dagger &dagger;
+          u'\u2021',                    # double dagger &Dagger;
+          u'\u00A7',                    # section mark &sect;
+          u'\u00B6',                    # paragraph mark (pilcrow) &para;
+                                        # (parallels ['||'] in CMoS)
+          '#',                          # number sign
+          # The entries below were chosen arbitrarily.
+          u'\u2660',                    # spade suit &spades;
+          u'\u2665',                    # heart suit &hearts;
+          u'\u2666',                    # diamond suit &diams;
+          u'\u2663',                    # club suit &clubs;
+          ]
+
+    def apply(self):
+        self.autofootnote_labels = []
+        startnum = self.document.autofootnote_start
+        self.document.autofootnote_start = self.number_footnotes(startnum)
+        self.number_footnote_references(startnum)
+        self.symbolize_footnotes()
+        self.resolve_footnotes_and_citations()
+
+    def number_footnotes(self, startnum):
+        """
+        Assign numbers to autonumbered footnotes.
+
+        For labeled autonumbered footnotes, copy the number over to
+        corresponding footnote references.
+        """
+        for footnote in self.document.autofootnotes:
+            while 1:
+                label = str(startnum)
+                startnum += 1
+                if not self.document.nameids.has_key(label):
+                    break
+            footnote.insert(0, nodes.label('', label))
+            if footnote.hasattr('dupname'):
+                continue
+            if footnote.hasattr('name'):
+                name = footnote['name']
+                for ref in self.document.footnote_refs.get(name, []):
+                    ref += nodes.Text(label)
+                    ref.delattr('refname')
+                    ref['refid'] = footnote['id']
+                    footnote.add_backref(ref['id'])
+                    self.document.note_refid(ref)
+                    ref.resolved = 1
+            else:
+                footnote['name'] = label
+                self.document.note_explicit_target(footnote, footnote)
+                self.autofootnote_labels.append(label)
+        return startnum
+
+    def number_footnote_references(self, startnum):
+        """Assign numbers to autonumbered footnote references."""
+        i = 0
+        for ref in self.document.autofootnote_refs:
+            if ref.resolved or ref.hasattr('refid'):
+                continue
+            try:
+                label = self.autofootnote_labels[i]
+            except IndexError:
+                msg = self.document.reporter.error(
+                      'Too many autonumbered footnote references: only %s '
+                      'corresponding footnotes available.'
+                      % len(self.autofootnote_labels), base_node=ref)
+                msgid = self.document.set_id(msg)
+                for ref in self.document.autofootnote_refs[i:]:
+                    if ref.resolved or ref.hasattr('refname'):
+                        continue
+                    prb = nodes.problematic(
+                          ref.rawsource, ref.rawsource, refid=msgid)
+                    prbid = self.document.set_id(prb)
+                    msg.add_backref(prbid)
+                    ref.parent.replace(ref, prb)
+                break
+            ref += nodes.Text(label)
+            id = self.document.nameids[label]
+            footnote = self.document.ids[id]
+            ref['refid'] = id
+            self.document.note_refid(ref)
+            footnote.add_backref(ref['id'])
+            ref.resolved = 1
+            i += 1
+
+    def symbolize_footnotes(self):
+        """Add symbols indexes to "[*]"-style footnotes and references."""
+        labels = []
+        for footnote in self.document.symbol_footnotes:
+            reps, index = divmod(self.document.symbol_footnote_start,
+                                 len(self.symbols))
+            labeltext = self.symbols[index] * (reps + 1)
+            labels.append(labeltext)
+            footnote.insert(0, nodes.label('', labeltext))
+            self.document.symbol_footnote_start += 1
+            self.document.set_id(footnote)
+        i = 0
+        for ref in self.document.symbol_footnote_refs:
+            try:
+                ref += nodes.Text(labels[i])
+            except IndexError:
+                msg = self.document.reporter.error(
+                      'Too many symbol footnote references: only %s '
+                      'corresponding footnotes available.' % len(labels),
+                      base_node=ref)
+                msgid = self.document.set_id(msg)
+                for ref in self.document.symbol_footnote_refs[i:]:
+                    if ref.resolved or ref.hasattr('refid'):
+                        continue
+                    prb = nodes.problematic(
+                          ref.rawsource, ref.rawsource, refid=msgid)
+                    prbid = self.document.set_id(prb)
+                    msg.add_backref(prbid)
+                    ref.parent.replace(ref, prb)
+                break
+            footnote = self.document.symbol_footnotes[i]
+            ref['refid'] = footnote['id']
+            self.document.note_refid(ref)
+            footnote.add_backref(ref['id'])
+            i += 1
+
+    def resolve_footnotes_and_citations(self):
+        """
+        Link manually-labeled footnotes and citations to/from their
+        references.
+        """
+        for footnote in self.document.footnotes:
+            label = footnote['name']
+            if self.document.footnote_refs.has_key(label):
+                reflist = self.document.footnote_refs[label]
+                self.resolve_references(footnote, reflist)
+        for citation in self.document.citations:
+            label = citation['name']
+            if self.document.citation_refs.has_key(label):
+                reflist = self.document.citation_refs[label]
+                self.resolve_references(citation, reflist)
+
+    def resolve_references(self, note, reflist):
+        id = note['id']
+        for ref in reflist:
+            if ref.resolved:
+                continue
+            ref.delattr('refname')
+            ref['refid'] = id
+            note.add_backref(ref['id'])
+            ref.resolved = 1
+        note.resolved = 1
+
+
+class Substitutions(Transform):
+
+    """
+    Given the following ``document`` as input::
+
+        <document>
+            <paragraph>
+                The
+                <substitution_reference refname="biohazard">
+                    biohazard
+                 symbol is deservedly scary-looking.
+            <substitution_definition name="biohazard">
+                <image alt="biohazard" uri="biohazard.png">
+
+    The ``substitution_reference`` will simply be replaced by the
+    contents of the corresponding ``substitution_definition``.
+
+    The transformed result will be::
+
+        <document>
+            <paragraph>
+                The
+                <image alt="biohazard" uri="biohazard.png">
+                 symbol is deservedly scary-looking.
+            <substitution_definition name="biohazard">
+                <image alt="biohazard" uri="biohazard.png">
+    """
+
+    default_priority = 220
+    """The Substitutions transform has to be applied very early, before
+    `docutils.transforms.frontmatter.DocTitle` and others."""
+
+    def apply(self):
+        defs = self.document.substitution_defs
+        normed = self.document.substitution_names
+        for refname, refs in self.document.substitution_refs.items():
+            for ref in refs:
+                key = None
+                if defs.has_key(refname):
+                    key = refname
+                else:
+                    normed_name = refname.lower()
+                    if normed.has_key(normed_name):
+                        key = normed[normed_name]
+                if key is None:
+                    msg = self.document.reporter.error(
+                          'Undefined substitution referenced: "%s".'
+                          % refname, base_node=ref)
+                    msgid = self.document.set_id(msg)
+                    prb = nodes.problematic(
+                          ref.rawsource, ref.rawsource, refid=msgid)
+                    prbid = self.document.set_id(prb)
+                    msg.add_backref(prbid)
+                    ref.parent.replace(ref, prb)
+                else:
+                    ref.parent.replace(ref, defs[key].get_children())
+        self.document.substitution_refs = None  # release replaced references
+
+
+class TargetNotes(Transform):
+
+    """
+    Creates a footnote for each external target in the text, and corresponding
+    footnote references after each reference.
+    """
+
+    default_priority = 540
+    """The TargetNotes transform has to be applied after `IndirectHyperlinks`
+    but before `Footnotes`."""
+
+    def apply(self):
+        notes = {}
+        nodelist = []
+        for target in self.document.external_targets:
+            name = target.get('name')
+            if not name:
+                print >>sys.stderr, 'no name on target: %r' % target
+                continue
+            refs = self.document.refnames.get(name, [])
+            if not refs:
+                continue
+            footnote = self.make_target_footnote(target, refs, notes)
+            if not notes.has_key(target['refuri']):
+                notes[target['refuri']] = footnote
+                nodelist.append(footnote)
+        if len(self.document.anonymous_targets) \
+               == len(self.document.anonymous_refs):
+            for target, ref in zip(self.document.anonymous_targets,
+                                   self.document.anonymous_refs):
+                if target.hasattr('refuri'):
+                    footnote = self.make_target_footnote(target, [ref], notes)
+                    if not notes.has_key(target['refuri']):
+                        notes[target['refuri']] = footnote
+                        nodelist.append(footnote)
+        self.startnode.parent.replace(self.startnode, nodelist)
+
+    def make_target_footnote(self, target, refs, notes):
+        refuri = target['refuri']
+        if notes.has_key(refuri):  # duplicate?
+            footnote = notes[refuri]
+            footnote_name = footnote['name']
+        else:                           # original
+            footnote = nodes.footnote()
+            footnote_id = self.document.set_id(footnote)
+            # Use a colon; they can't be produced inside names by the parser:
+            footnote_name = 'target_note: ' + footnote_id
+            footnote['auto'] = 1
+            footnote['name'] = footnote_name
+            footnote_paragraph = nodes.paragraph()
+            footnote_paragraph += nodes.reference('', refuri, refuri=refuri)
+            footnote += footnote_paragraph
+            self.document.note_autofootnote(footnote)
+            self.document.note_explicit_target(footnote, footnote)
+        for ref in refs:
+            if isinstance(ref, nodes.target):
+                continue
+            refnode = nodes.footnote_reference(
+                refname=footnote_name, auto=1)
+            self.document.note_autofootnote_ref(refnode)
+            self.document.note_footnote_ref(refnode)
+            index = ref.parent.index(ref) + 1
+            reflist = [refnode]
+            if not self.document.settings.trim_footnote_reference_space:
+                reflist.insert(0, nodes.Text(' '))
+            ref.parent.insert(index, reflist)
+        return footnote

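The before/after trees in the docstrings above (ExternalTargets in
particular) can be reproduced with a short sketch, again assuming
publish_string from this bundled copy and a made-up example.org target:

    from docutils.core import publish_string

    source = '\n'.join([
        'See the `direct external`_ target below.',
        '',
        '.. _direct external: http://example.org/',
        ''])
    # In the pseudo-XML output the reference carries
    # refuri="http://example.org/" instead of its original refname, as the
    # ExternalTargets docstring describes.
    print publish_string(source, writer_name='pseudoxml')
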
Added: trunk/www/utils/helpers/docutils/docutils/transforms/universal.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/transforms/universal.py   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/transforms/universal.py   2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,185 @@
+# Authors: David Goodger, Ueli Schlaepfer
+# Contact: address@hidden
+# Revision: $Revision: 1.19 $
+# Date: $Date: 2003/04/10 16:03:45 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Transforms needed by most or all documents:
+
+- `Decorations`: Generate a document's header & footer.
+- `Messages`: Placement of system messages stored in
+  `nodes.document.transform_messages`.
+- `TestMessages`: Like `Messages`, used on test runs.
+- `FinalReferences`: Resolve remaining references.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import re
+import sys
+import time
+from docutils import nodes, utils
+from docutils.transforms import TransformError, Transform
+
+
+class Decorations(Transform):
+
+    """
+    Populate a document's decoration element (header, footer).
+    """
+
+    default_priority = 820
+
+    def apply(self):
+        header = self.generate_header()
+        footer = self.generate_footer()
+        if header or footer:
+            decoration = nodes.decoration()
+            decoration += header
+            decoration += footer
+            document = self.document
+            index = document.first_child_not_matching_class(
+                nodes.PreDecorative)
+            if index is None:
+                document += decoration
+            else:
+                document[index:index] = [decoration]
+
+    def generate_header(self):
+        return None
+
+    def generate_footer(self):
+        # @@@ Text is hard-coded for now.
+        # Should be made dynamic (language-dependent).
+        settings = self.document.settings
+        if settings.generator or settings.datestamp or settings.source_link \
+               or settings.source_url:
+            text = []
+            if settings.source_link and settings._source \
+                   or settings.source_url:
+                if settings.source_url:
+                    source = settings.source_url
+                else:
+                    source = utils.relative_path(settings._destination,
+                                                 settings._source)
+                text.extend([
+                    nodes.reference('', 'View document source',
+                                    refuri=source),
+                    nodes.Text('.\n')])
+            if settings.datestamp:
+                datestamp = time.strftime(settings.datestamp, time.gmtime())
+                text.append(nodes.Text('Generated on: ' + datestamp + '.\n'))
+            if settings.generator:
+                text.extend([
+                    nodes.Text('Generated by '),
+                    nodes.reference('', 'Docutils', refuri=
+                                    'http://docutils.sourceforge.net/'),
+                    nodes.Text(' from '),
+                    nodes.reference('', 'reStructuredText', refuri='http://'
+                                    'docutils.sourceforge.net/rst.html'),
+                    nodes.Text(' source.\n')])
+            footer = nodes.footer()
+            footer += nodes.paragraph('', '', *text)
+            return footer
+        else:
+            return None
+
+
+class Messages(Transform):
+
+    """
+    Place any system messages generated after parsing into a dedicated section
+    of the document.
+    """
+
+    default_priority = 860
+
+    def apply(self):
+        unfiltered = self.document.transform_messages
+        threshold = self.document.reporter['writer'].report_level
+        messages = []
+        for msg in unfiltered:
+            if msg['level'] >= threshold and not msg.parent:
+                messages.append(msg)
+        if messages:
+            section = nodes.section(CLASS='system-messages')
+            # @@@ get this from the language module?
+            section += nodes.title('', 'Docutils System Messages')
+            section += messages
+            self.document.transform_messages[:] = []
+            self.document += section
+
+
+class TestMessages(Transform):
+
+    """
+    Append all post-parse system messages to the end of the document.
+    """
+
+    default_priority = 890
+
+    def apply(self):
+        for msg in self.document.transform_messages:
+            if not msg.parent:
+                self.document += msg
+
+
+class FinalChecks(Transform):
+
+    """
+    Perform last-minute checks.
+
+    - Check for dangling references (incl. footnote & citation).
+    """
+
+    default_priority = 840
+
+    def apply(self):
+        visitor = FinalCheckVisitor(self.document)
+        self.document.walk(visitor)
+        if self.document.settings.expose_internals:
+            visitor = InternalAttributeExposer(self.document)
+            self.document.walk(visitor)
+
+
+class FinalCheckVisitor(nodes.SparseNodeVisitor):
+
+    def unknown_visit(self, node):
+        pass
+
+    def visit_reference(self, node):
+        if node.resolved or not node.hasattr('refname'):
+            return
+        refname = node['refname']
+        id = self.document.nameids.get(refname)
+        if id is None:
+            msg = self.document.reporter.error(
+                  'Unknown target name: "%s".' % (node['refname']),
+                  base_node=node)
+            msgid = self.document.set_id(msg)
+            prb = nodes.problematic(
+                  node.rawsource, node.rawsource, refid=msgid)
+            prbid = self.document.set_id(prb)
+            msg.add_backref(prbid)
+            node.parent.replace(node, prb)
+        else:
+            del node['refname']
+            node['refid'] = id
+            self.document.ids[id].referenced = 1
+            node.resolved = 1
+
+    visit_footnote_reference = visit_citation_reference = visit_reference
+
+
+class InternalAttributeExposer(nodes.GenericNodeVisitor):
+
+    def __init__(self, document):
+        nodes.GenericNodeVisitor.__init__(self, document)
+        self.internal_attributes = document.settings.expose_internals
+
+    def default_visit(self, node):
+        for att in self.internal_attributes:
+            value = getattr(node, att, None)
+            if value is not None:
+                node['internal:' + att] = value

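The footer assembled by Decorations.generate_footer() above is driven
entirely by settings; a sketch under the assumption that publish_string
accepts a settings_overrides mapping and that the 'generator' and
'datestamp' setting names match the attributes read in that method:

    from docutils.core import publish_string

    overrides = {'generator': 1,                     # "Generated by Docutils ..."
                 'datestamp': '%Y-%m-%d %H:%M UTC'}  # strftime format, see above
    # The resulting tree gains a <decoration><footer> element (rendered at
    # the bottom of the page by the HTML writer) holding the datestamp and
    # generator text.
    print publish_string('A one-paragraph document.',
                         writer_name='pseudoxml',
                         settings_overrides=overrides)
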
Added: trunk/www/utils/helpers/docutils/docutils/urischemes.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/urischemes.py     2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/urischemes.py     2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,105 @@
+"""
+`schemes` is a dictionary with lowercase URI addressing schemes as
+keys and descriptions as values. It was compiled from the index at
+http://www.w3.org/Addressing/schemes.html (revised 2001-08-20).
+"""
+
+# Many values are blank and should be filled in with useful descriptions.
+
+schemes = {
+      'about': 'provides information on Navigator',
+      'acap': 'Application Configuration Access Protocol',
+      'addbook': "To add vCard entries to Communicator's Address Book",
+      'afp': 'Apple Filing Protocol',
+      'afs': 'Andrew File System global file names',
+      'aim': 'AOL Instant Messenger',
+      'callto': 'for NetMeeting links',
+      'castanet': 'Castanet Tuner URLs for Netcaster',
+      'chttp': 'cached HTTP supported by RealPlayer',
+      'cid': 'content identifier',
+      'data': ('allows inclusion of small data items as "immediate" data; '
+               'RFC 2397'),
+      'dav': 'Distributed Authoring and Versioning Protocol; RFC 2518',
+      'dns': 'Domain Name System resources',
+      'eid': ('External ID; non-URL data; general escape mechanism to allow '
+              'access to information for applications that are too '
+              'specialized to justify their own schemes'),
+      'fax': ('a connection to a terminal that can handle telefaxes '
+              '(facsimiles); RFC 2806'),
+      'file': 'Host-specific file names',
+      'finger': '',
+      'freenet': '',
+      'ftp': 'File Transfer Protocol',
+      'gopher': 'The Gopher Protocol',
+      'gsm-sms': ('Global System for Mobile Communications Short Message '
+                  'Service'),
+      'h323': 'video (audiovisual) communication on local area networks',
+      'h324': ('video and audio communications over low bitrate connections '
+               'such as POTS modem connections'),
+      'hdl': 'CNRI handle system',
+      'hnews': 'an HTTP-tunneling variant of the NNTP news protocol',
+      'http': 'Hypertext Transfer Protocol',
+      'https': 'HTTP over SSL',
+      'iioploc': 'Internet Inter-ORB Protocol Location?',
+      'ilu': 'Inter-Language Unification',
+      'imap': 'Internet Message Access Protocol',
+      'ior': 'CORBA interoperable object reference',
+      'ipp': 'Internet Printing Protocol',
+      'irc': 'Internet Relay Chat',
+      'jar': 'Java archive',
+      'javascript': ('JavaScript code; evaluates the expression after the '
+                     'colon'),
+      'jdbc': '',
+      'ldap': 'Lightweight Directory Access Protocol',
+      'lifn': '',
+      'livescript': '',
+      'lrq': '',
+      'mailbox': 'Mail folder access',
+      'mailserver': 'Access to data available from mail servers',
+      'mailto': 'Electronic mail address',
+      'md5': '',
+      'mid': 'message identifier',
+      'mocha': '',
+      'modem': ('a connection to a terminal that can handle incoming data '
+                'calls; RFC 2806'),
+      'news': 'USENET news',
+      'nfs': 'Network File System protocol',
+      'nntp': 'USENET news using NNTP access',
+      'opaquelocktoken': '',
+      'phone': '',
+      'pop': 'Post Office Protocol',
+      'pop3': 'Post Office Protocol v3',
+      'printer': '',
+      'prospero': 'Prospero Directory Service',
+      'res': '',
+      'rtsp': 'real time streaming protocol',
+      'rvp': '',
+      'rwhois': '',
+      'rx': 'Remote Execution',
+      'sdp': '',
+      'service': 'service location',
+      'shttp': 'secure hypertext transfer protocol',
+      'sip': 'Session Initiation Protocol',
+      'smb': '',
+      'snews': 'For NNTP postings via SSL',
+      't120': 'real time data conferencing (audiographics)',
+      'tcp': '',
+      'tel': ('a connection to a terminal that handles normal voice '
+              'telephone calls, a voice mailbox or another voice messaging '
+              'system or a service that can be operated using DTMF tones; '
+              'RFC 2806.'),
+      'telephone': 'telephone',
+      'telnet': 'Reference to interactive sessions',
+      'tip': 'Transaction Internet Protocol',
+      'tn3270': 'Interactive 3270 emulation sessions',
+      'tv': '',
+      'urn': 'Uniform Resource Name',
+      'uuid': '',
+      'vemmi': 'versatile multimedia interface',
+      'videotex': '',
+      'view-source': 'displays HTML code that was generated with JavaScript',
+      'wais': 'Wide Area Information Servers',
+      'whodp': '',
+      'whois++': 'Distributed directory service.',
+      'z39.50r': 'Z39.50 Retrieval',
+      'z39.50s': 'Z39.50 Session',}

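Since `schemes` above is a plain dictionary keyed by lowercase scheme names,
callers can use it directly for lookups; a small sketch with a made-up URI:

    from docutils import urischemes

    uri = 'mailto:someone@example.org'      # hypothetical example URI
    scheme = uri.split(':', 1)[0].lower()
    if urischemes.schemes.has_key(scheme):
        print '%s: %s' % (scheme, urischemes.schemes[scheme])
    else:
        print 'unknown scheme: %s' % scheme
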
Added: trunk/www/utils/helpers/docutils/docutils/utils.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/utils.py  2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/utils.py  2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,446 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.28 $
+# Date: $Date: 2003/06/16 21:29:34 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Miscellaneous utilities for the documentation utilities.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import os
+import os.path
+from types import StringType, UnicodeType
+from docutils import ApplicationError, DataError
+from docutils import frontend, nodes
+
+
+class SystemMessage(ApplicationError):
+
+    def __init__(self, system_message):
+        Exception.__init__(self, system_message.astext())
+
+
+class Reporter:
+
+    """
+    Info/warning/error reporter and ``system_message`` element generator.
+
+    Five levels of system messages are defined, along with corresponding
+    methods: `debug()`, `info()`, `warning()`, `error()`, and `severe()`.
+
+    There is typically one Reporter object per process.  A Reporter object is
+    instantiated with thresholds for reporting (generating warnings) and
+    halting processing (raising exceptions), a switch to turn debug output on
+    or off, and an I/O stream for warnings.  These are stored in the default
+    reporting category, '' (zero-length string).
+
+    Multiple reporting categories [#]_ may be set, each with its own reporting
+    and halting thresholds, debugging switch, and warning stream
+    (collectively a `ConditionSet`).  Categories are hierarchical dotted-name
+    strings that look like attribute references: 'spam', 'spam.eggs',
+    'neeeow.wum.ping'.  The 'spam' category is the ancestor of
+    'spam.bacon.eggs'.  Unset categories inherit stored conditions from their
+    closest ancestor category that has been set.
+
+    When a system message is generated, the stored conditions from its
+    category (or ancestor if unset) are retrieved.  The system message level
+    is compared to the thresholds stored in the category, and a warning or
+    error is generated as appropriate.  Debug messages are produced iff the
+    stored debug switch is on.  Message output is sent to the stored warning
+    stream.
+
+    The default category is '' (empty string).  By convention, Writers should
+    retrieve reporting conditions from the 'writer' category (which, unless
+    explicitly set, defaults to the conditions of the default category).
+
+    The Reporter class also employs a modified form of the "Observer" pattern
+    [GoF95]_ to track system messages generated.  The `attach_observer` method
+    should be called before parsing, with a bound method or function which
+    accepts system messages.  The observer can be removed with
+    `detach_observer`, and another added in its place.
+
+    .. [#] The concept of "categories" was inspired by the log4j project:
+       http://jakarta.apache.org/log4j/.
+
+    .. [GoF95] Gamma, Helm, Johnson, Vlissides. *Design Patterns: Elements of
+       Reusable Object-Oriented Software*. Addison-Wesley, Reading, MA, USA,
+       1995.
+    """
+
+    levels = 'DEBUG INFO WARNING ERROR SEVERE'.split()
+    """List of names for system message levels, indexed by level."""
+
+    def __init__(self, source, report_level, halt_level, stream=None,
+                 debug=0, encoding='ascii', error_handler='replace'):
+        """
+        Initialize the `ConditionSet` for the `Reporter`'s default category.
+
+        :Parameters:
+
+            - `source`: The path to or description of the source data.
+            - `report_level`: The level at or above which warning output will
+              be sent to `stream`.
+            - `halt_level`: The level at or above which `SystemMessage`
+              exceptions will be raised, halting execution.
+            - `debug`: Show debug (level=0) system messages?
+            - `stream`: Where warning output is sent.  Can be file-like (has a
+              ``.write`` method), a string (file name, opened for writing), or
+              `None` (implies `sys.stderr`; default).
+            - `encoding`: The encoding for stderr output.
+            - `error_handler`: The error handler for stderr output encoding.
+        """
+        self.source = source
+        """The path to or description of the source data."""
+        
+        if stream is None:
+            stream = sys.stderr
+        elif type(stream) in (StringType, UnicodeType):
+            raise NotImplementedError('This should open a file for writing.')
+
+        self.encoding = encoding
+        """The character encoding for the stderr output."""
+
+        self.error_handler = error_handler
+        """The character encoding error handler."""
+
+        self.categories = {'': ConditionSet(debug, report_level, halt_level,
+                                            stream)}
+        """Mapping of category names to conditions. Default category is ''."""
+
+        self.observers = []
+        """List of bound methods or functions to call with each system_message
+        created."""
+
+        self.max_level = -1
+        """The highest level system message generated so far."""
+
+    def set_conditions(self, category, report_level, halt_level,
+                       stream=None, debug=0):
+        if stream is None:
+            stream = sys.stderr
+        self.categories[category] = ConditionSet(debug, report_level,
+                                                 halt_level, stream)
+
+    def unset_conditions(self, category):
+        if category and self.categories.has_key(category):
+            del self.categories[category]
+
+    __delitem__ = unset_conditions
+
+    def get_conditions(self, category):
+        while not self.categories.has_key(category):
+            category = category[:category.rfind('.') + 1][:-1]
+        return self.categories[category]
+
+    __getitem__ = get_conditions
+
+    def attach_observer(self, observer):
+        """
+        The `observer` parameter is a function or bound method which takes one
+        argument, a `nodes.system_message` instance.
+        """
+        self.observers.append(observer)
+
+    def detach_observer(self, observer):
+        self.observers.remove(observer)
+
+    def notify_observers(self, message):
+        for observer in self.observers:
+            observer(message)
+
+    def system_message(self, level, message, *children, **kwargs):
+        """
+        Return a system_message object.
+
+        Raise an exception or generate a warning if appropriate.
+        """
+        attributes = kwargs.copy()
+        category = kwargs.get('category', '')
+        if kwargs.has_key('category'):
+            del attributes['category']
+        if kwargs.has_key('base_node'):
+            source, line = get_source_line(kwargs['base_node'])
+            del attributes['base_node']
+            if source is not None:
+                attributes.setdefault('source', source)
+            if line is not None:
+                attributes.setdefault('line', line)
+        attributes.setdefault('source', self.source)
+        msg = nodes.system_message(message, level=level,
+                                   type=self.levels[level],
+                                   *children, **attributes)
+        debug, report_level, halt_level, stream = self[category].astuple()
+        if level >= report_level or debug and level == 0:
+            msgtext = msg.astext().encode(self.encoding, self.error_handler)
+            if category:
+                print >>stream, msgtext, '[%s]' % category
+            else:
+                print >>stream, msgtext
+        if level >= halt_level:
+            raise SystemMessage(msg)
+        if level > 0 or debug:
+            self.notify_observers(msg)
+        self.max_level = max(level, self.max_level)
+        return msg
+
+    def debug(self, *args, **kwargs):
+        """
+        Level-0, "DEBUG": an internal reporting issue. Typically, there is no
+        effect on the processing. Level-0 system messages are handled
+        separately from the others.
+        """
+        return self.system_message(0, *args, **kwargs)
+
+    def info(self, *args, **kwargs):
+        """
+        Level-1, "INFO": a minor issue that can be ignored. Typically there is
+        no effect on processing, and level-1 system messages are not reported.
+        """
+        return self.system_message(1, *args, **kwargs)
+
+    def warning(self, *args, **kwargs):
+        """
+        Level-2, "WARNING": an issue that should be addressed. If ignored,
+        there may be unpredictable problems with the output.
+        """
+        return self.system_message(2, *args, **kwargs)
+
+    def error(self, *args, **kwargs):
+        """
+        Level-3, "ERROR": an error that should be addressed. If ignored, the
+        output will contain errors.
+        """
+        return self.system_message(3, *args, **kwargs)
+
+    def severe(self, *args, **kwargs):
+        """
+        Level-4, "SEVERE": a severe error that must be addressed. If ignored,
+        the output will contain severe errors. Typically level-4 system
+        messages are turned into exceptions which halt processing.
+        """
+        return self.system_message(4, *args, **kwargs)
+
+
+class ConditionSet:
+
+    """
+    A set of two thresholds (`report_level` & `halt_level`), a switch
+    (`debug`), and an I/O stream (`stream`), corresponding to one `Reporter`
+    category.
+    """
+
+    def __init__(self, debug, report_level, halt_level, stream):
+        self.debug = debug
+        self.report_level = report_level
+        self.halt_level = halt_level
+        self.stream = stream
+
+    def astuple(self):
+        return (self.debug, self.report_level, self.halt_level,
+                self.stream)
+
+
+class ExtensionOptionError(DataError): pass
+class BadOptionError(ExtensionOptionError): pass
+class BadOptionDataError(ExtensionOptionError): pass
+class DuplicateOptionError(ExtensionOptionError): pass
+
+
+def extract_extension_options(field_list, options_spec):
+    """
+    Return a dictionary mapping extension option names to converted values.
+
+    :Parameters:
+        - `field_list`: A flat field list without field arguments, where each
+          field body consists of a single paragraph only.
+        - `options_spec`: Dictionary mapping known option names to a
+          conversion function such as `int` or `float`.
+
+    :Exceptions:
+        - `KeyError` for unknown option names.
+        - `ValueError` for invalid option values (raised by the conversion
+           function).
+        - `DuplicateOptionError` for duplicate options.
+        - `BadOptionError` for invalid fields.
+        - `BadOptionDataError` for invalid option data (missing name,
+          missing data, bad quotes, etc.).
+    """
+    option_list = extract_options(field_list)
+    option_dict = assemble_option_dict(option_list, options_spec)
+    return option_dict
+
+def extract_options(field_list):
+    """
+    Return a list of option (name, value) pairs from field names & bodies.
+
+    :Parameter:
+        `field_list`: A flat field list, where each field name is a single
+        word and each field body consists of a single paragraph only.
+
+    :Exceptions:
+        - `BadOptionError` for invalid fields.
+        - `BadOptionDataError` for invalid option data (missing name,
+          missing data, bad quotes, etc.).
+    """
+    option_list = []
+    for field in field_list:
+        if len(field[0].astext().split()) != 1:
+            raise BadOptionError(
+                'extension option field name may not contain multiple words')
+        name = str(field[0].astext().lower())
+        body = field[1]
+        if len(body) == 0:
+            data = None
+        elif len(body) > 1 or not isinstance(body[0], nodes.paragraph) \
+              or len(body[0]) != 1 or not isinstance(body[0][0], nodes.Text):
+            raise BadOptionDataError(
+                  'extension option field body may contain\n'
+                  'a single paragraph only (option "%s")' % name)
+        else:
+            data = body[0][0].astext()
+        option_list.append((name, data))
+    return option_list
+
+def assemble_option_dict(option_list, options_spec):
+    """
+    Return a mapping of option names to values.
+
+    :Parameters:
+        - `option_list`: A list of (name, value) pairs (the output of
+          `extract_options()`).
+        - `options_spec`: Dictionary mapping known option names to a
+          conversion function such as `int` or `float`.
+
+    :Exceptions:
+        - `KeyError` for unknown option names.
+        - `DuplicateOptionError` for duplicate options.
+        - `ValueError` for invalid option values (raised by conversion
+           function).
+    """
+    options = {}
+    for name, value in option_list:
+        convertor = options_spec[name]       # raises KeyError if unknown
+        if options.has_key(name):
+            raise DuplicateOptionError('duplicate option "%s"' % name)
+        try:
+            options[name] = convertor(value)
+        except (ValueError, TypeError), detail:
+            raise detail.__class__('(option: "%s"; value: %r)\n%s'
+                                   % (name, value, detail))
+    return options
+
+
+class NameValueError(DataError): pass
+
+
+def extract_name_value(line):
+    """
+    Return a list of (name, value) from a line of the form "name=value ...".
+
+    :Exception:
+        `NameValueError` for invalid input (missing name, missing data, bad
+        quotes, etc.).
+    """
+    attlist = []
+    while line:
+        equals = line.find('=')
+        if equals == -1:
+            raise NameValueError('missing "="')
+        attname = line[:equals].strip()
+        if equals == 0 or not attname:
+            raise NameValueError(
+                  'missing attribute name before "="')
+        line = line[equals+1:].lstrip()
+        if not line:
+            raise NameValueError(
+                  'missing value after "%s="' % attname)
+        if line[0] in '\'"':
+            endquote = line.find(line[0], 1)
+            if endquote == -1:
+                raise NameValueError(
+                      'attribute "%s" missing end quote (%s)'
+                      % (attname, line[0]))
+            if len(line) > endquote + 1 and line[endquote + 1].strip():
+                raise NameValueError(
+                      'attribute "%s" end quote (%s) not followed by '
+                      'whitespace' % (attname, line[0]))
+            data = line[1:endquote]
+            line = line[endquote+1:].lstrip()
+        else:
+            space = line.find(' ')
+            if space == -1:
+                data = line
+                line = ''
+            else:
+                data = line[:space]
+                line = line[space+1:].lstrip()
+        attlist.append((attname.lower(), data))
+    return attlist
+
+def new_document(source, settings=None):
+    """
+    Return a new empty document object.
+
+    :Parameters:
+        `source` : string
+            The path to or description of the source text of the document.
+        `settings` : optparse.Values object
+            Runtime settings.  If none provided, a default set will be used.
+    """
+    if settings is None:
+        settings = frontend.OptionParser().get_default_values()
+    reporter = Reporter(source, settings.report_level, settings.halt_level,
+                        stream=settings.warning_stream, debug=settings.debug,
+                        encoding=settings.error_encoding,
+                        error_handler=settings.error_encoding_error_handler)
+    document = nodes.document(settings, reporter, source=source)
+    document.note_source(source, -1)
+    return document
+
+def clean_rcs_keywords(paragraph, keyword_substitutions):
+    if len(paragraph) == 1 and isinstance(paragraph[0], nodes.Text):
+        textnode = paragraph[0]
+        for pattern, substitution in keyword_substitutions:
+            match = pattern.match(textnode.data)
+            if match:
+                textnode.data = pattern.sub(substitution, textnode.data)
+                return
+
+def relative_path(source, target):
+    """
+    Build and return a path to `target`, relative to `source` (both files).
+
+    If there is no common prefix, return the absolute path to `target`.
+    """
+    source_parts = os.path.abspath(source or 'dummy_file').split(os.sep)
+    target_parts = os.path.abspath(target).split(os.sep)
+    # Check first 2 parts because '/dir'.split('/') == ['', 'dir']:
+    if source_parts[:2] != target_parts[:2]:
+        # Nothing in common between paths.
+        # Return absolute path, using '/' for URLs:
+        return '/'.join(target_parts)
+    source_parts.reverse()
+    target_parts.reverse()
+    while (source_parts and target_parts
+           and source_parts[-1] == target_parts[-1]):
+        # Remove path components in common:
+        source_parts.pop()
+        target_parts.pop()
+    target_parts.reverse()
+    parts = ['..'] * (len(source_parts) - 1) + target_parts
+    return '/'.join(parts)
+
+def get_source_line(node):
+    """
+    Return the "source" and "line" attributes from the `node` given or from
+    its closest ancestor.
+    """
+    while node:
+        if node.source or node.line:
+            return node.source, node.line
+        node = node.parent
+    return None, None
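
As a quick, hedged illustration of the helpers added above (a minimal sketch; the
file name 'example.txt' and the category 'writer.html' are made up for the
demonstration, and the module is assumed to be importable as docutils.utils once
this tree is on the Python path):

    from docutils import utils

    # Reporter category lookup: a dotted category falls back to its nearest
    # configured ancestor (here 'writer.html' resolves to 'writer').
    reporter = utils.Reporter('example.txt', report_level=2, halt_level=4)
    reporter.set_conditions('writer', report_level=3, halt_level=4)
    assert reporter['writer.html'] is reporter['writer']

    # extract_name_value() splits a 'name=value ...' line into lower-cased
    # (name, value) pairs, honouring single or double quotes:
    assert utils.extract_name_value('alt="a figure" width=200') == \
           [('alt', 'a figure'), ('width', '200')]

    # relative_path() builds a path from one file to another, dropping the
    # leading components the two paths share:
    assert utils.relative_path('spec/doc.txt', 'spec/css/default.css') == \
           'css/default.css'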

Added: trunk/www/utils/helpers/docutils/docutils/writers/__init__.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/writers/__init__.py  2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/writers/__init__.py  2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,83 @@
+# Authors: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.12 $
+# Date: $Date: 2003/02/24 14:20:01 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains Docutils Writer modules.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import docutils
+from docutils import languages, Component
+from docutils.transforms import universal
+
+
+class Writer(Component):
+
+    """
+    Abstract base class for docutils Writers.
+
+    Each writer module or package must export a subclass also called 'Writer'.
+    Each writer must support all standard node types listed in
+    `docutils.nodes.node_class_names`.
+
+    Call `write()` to process a document.
+    """
+
+    component_type = 'writer'
+
+    document = None
+    """The document to write."""
+
+    language = None
+    """Language module for the document."""
+
+    destination = None
+    """`docutils.io` IO object; where to write the document."""
+
+    def __init__(self):
+        """Initialize the Writer instance."""
+
+    def write(self, document, destination):
+        self.document = document
+        self.language = languages.get_language(
+            document.settings.language_code)
+        self.destination = destination
+        self.translate()
+        output = self.destination.write(self.output)
+        return output
+
+    def translate(self):
+        """
+        Override to do final document tree translation.
+
+        This is usually done with a `docutils.nodes.NodeVisitor` subclass, in
+        combination with a call to `docutils.nodes.Node.walk()` or
+        `docutils.nodes.Node.walkabout()`.  The ``NodeVisitor`` subclass must
+        support all standard elements (listed in
+        `docutils.nodes.node_class_names`) and possibly non-standard elements
+        used by the current Reader as well.
+        """
+        raise NotImplementedError('subclass must override this method')
+
+
+_writer_aliases = {
+      'html': 'html4css1',
+      'latex': 'latex2e',
+      'pprint': 'pseudoxml',
+      'pformat': 'pseudoxml',
+      'pdf': 'rlpdf',
+      'xml': 'docutils_xml',}
+
+def get_writer_class(writer_name):
+    """Return the Writer class from the `writer_name` module."""
+    writer_name = writer_name.lower()
+    if _writer_aliases.has_key(writer_name):
+        writer_name = _writer_aliases[writer_name]
+    module = __import__(writer_name, globals(), locals())
+    return module.Writer

Added: trunk/www/utils/helpers/docutils/docutils/writers/docutils_xml.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/writers/docutils_xml.py  2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/writers/docutils_xml.py  2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,66 @@
+# Authors: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.6 $
+# Date: $Date: 2002/11/28 03:30:19 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Simple internal document tree Writer, writes Docutils XML.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import docutils
+from docutils import writers
+
+
+class Writer(writers.Writer):
+
+    supported = ('xml',)
+    """Formats this writer supports."""
+
+    settings_spec = (
+        '"Docutils XML" Writer Options',
+        'Warning: the --newlines and --indents options may adversely affect '
+        'whitespace; use them only for reading convenience.',
+        (('Generate XML with newlines before and after tags.',
+          ['--newlines'], {'action': 'store_true'}),
+         ('Generate XML with indents and newlines.',
+          ['--indents'], {'action': 'store_true'}),
+         ('Omit the XML declaration.  Use with caution.',
+          ['--no-xml-declaration'], {'dest': 'xml_declaration', 'default': 1,
+                                     'action': 'store_false'}),
+         ('Omit the DOCTYPE declaration.',
+          ['--no-doctype'], {'dest': 'doctype_declaration', 'default': 1,
+                             'action': 'store_false'}),))
+
+    output = None
+    """Final translated form of `document`."""
+
+    xml_declaration = '<?xml version="1.0" encoding="%s"?>\n'
+    #xml_stylesheet = '<?xml-stylesheet type="text/xsl" href="%s"?>\n'
+    doctype = (
+        '<!DOCTYPE document PUBLIC'
+        ' "+//IDN docutils.sourceforge.net//DTD Docutils Generic//EN//XML"'
+        ' "http://docutils.sourceforge.net/spec/docutils.dtd";>\n')
+    generator = '<!-- Generated by Docutils %s -->\n'
+
+    def translate(self):
+        settings = self.document.settings
+        indent = newline = ''
+        if settings.newlines:
+            newline = '\n'
+        if settings.indents:
+            newline = '\n'
+            indent = '    '
+        output_prefix = []
+        if settings.xml_declaration:
+            output_prefix.append(
+                self.xml_declaration % settings.output_encoding)
+        if settings.doctype_declaration:
+            output_prefix.append(self.doctype)
+        output_prefix.append(self.generator % docutils.__version__)
+        docnode = self.document.asdom().childNodes[0]
+        self.output = (''.join(output_prefix)
+                       + docnode.toprettyxml(indent, newline))

Added: trunk/www/utils/helpers/docutils/docutils/writers/html4css1.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/writers/html4css1.py  2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/writers/html4css1.py  2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,1244 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.84 $
+# Date: $Date: 2003/06/16 21:29:24 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Simple HyperText Markup Language document tree Writer.
+
+The output conforms to the HTML 4.01 Transitional DTD and to the Extensible
+HTML version 1.0 Transitional DTD (*almost* strict).  The output contains a
+minimum of formatting information.  A cascading style sheet ("default.css" by
+default) is required for proper viewing with a modern graphical browser.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import os
+import os.path
+import time
+import re
+from types import ListType
+import docutils
+from docutils import nodes, utils, writers, languages
+
+
+class Writer(writers.Writer):
+
+    supported = ('html', 'html4css1', 'xhtml')
+    """Formats this writer supports."""
+
+    settings_spec = (
+        'HTML-Specific Options',
+        None,
+        (('Specify a stylesheet URL, used verbatim.  Default is '
+          '"default.css".  Overridden by --stylesheet-path.',
+          ['--stylesheet'],
+          {'default': 'default.css', 'metavar': '<URL>'}),
+         ('Specify a stylesheet file, relative to the current working '
+          'directory.  The path is adjusted relative to the output HTML '
+          'file.  Overrides --stylesheet.',
+          ['--stylesheet-path'],
+          {'metavar': '<file>'}),
+         ('Link to the stylesheet in the output HTML file.  This is the '
+          'default.',
+          ['--link-stylesheet'],
+          {'dest': 'embed_stylesheet', 'action': 'store_false'}),
+         ('Embed the stylesheet in the output HTML file.  The stylesheet '
+          'file must be accessible during processing (--stylesheet-path is '
+          'recommended).  The stylesheet is embedded inside a comment, so it '
+          'must not contain the text "--" (two hyphens).  Default: link the '
+          'stylesheet, do not embed it.',
+          ['--embed-stylesheet'],
+          {'action': 'store_true'}),
+         ('Format for footnote references: one of "superscript" or '
+          '"brackets".  Default is "superscript".',
+          ['--footnote-references'],
+          {'choices': ['superscript', 'brackets'], 'default': 'superscript',
+           'metavar': '<format>'}),
+         ('Format for block quote attributions: one of "dash" (em-dash '
+          'prefix), "parentheses"/"parens", or "none".  Default is "dash".',
+          ['--attribution'],
+          {'choices': ['dash', 'parentheses', 'parens', 'none'],
+           'default': 'dash', 'metavar': '<format>'}),
+         ('Remove extra vertical whitespace between items of bullet lists '
+          'and enumerated lists, when list items are "simple" (i.e., all '
+          'items each contain one paragraph and/or one "simple" sublist '
+          'only).  Default: enabled.',
+          ['--compact-lists'],
+          {'default': 1, 'action': 'store_true'}),
+         ('Disable compact simple bullet and enumerated lists.',
+          ['--no-compact-lists'],
+          {'dest': 'compact_lists', 'action': 'store_false'}),
+         ('Omit the XML declaration.  Use with caution.',
+          ['--no-xml-declaration'], {'dest': 'xml_declaration', 'default': 1,
+                                     'action': 'store_false'}),))
+
+    relative_path_settings = ('stylesheet_path',)
+
+    output = None
+    """Final translated form of `document`."""
+
+    def __init__(self):
+        writers.Writer.__init__(self)
+        self.translator_class = HTMLTranslator
+
+    def translate(self):
+        visitor = self.translator_class(self.document)
+        self.document.walkabout(visitor)
+        self.output = visitor.astext()
+        self.head_prefix = visitor.head_prefix
+        self.stylesheet = visitor.stylesheet
+        self.head = visitor.head
+        self.body_prefix = visitor.body_prefix
+        self.body_pre_docinfo = visitor.body_pre_docinfo
+        self.docinfo = visitor.docinfo
+        self.body = visitor.body
+        self.body_suffix = visitor.body_suffix
+
+
+class HTMLTranslator(nodes.NodeVisitor):
+
+    """
+    This HTML writer has been optimized to produce visually compact
+    lists (less vertical whitespace).  HTML's mixed content models
+    allow list items to contain "<li><p>body elements</p></li>" or
+    "<li>just text</li>" or even "<li>text<p>and body
+    elements</p>combined</li>", each with different effects.  It would
+    be best to stick with strict body elements in list items, but they
+    affect vertical spacing in browsers (although they really
+    shouldn't).
+
+    Here is an outline of the optimization:
+
+    - Check for and omit <p> tags in "simple" lists: list items
+      contain either a single paragraph, a nested simple list, or a
+      paragraph followed by a nested simple list.  This means that
+      this list can be compact:
+
+          - Item 1.
+          - Item 2.
+
+      But this list cannot be compact:
+
+          - Item 1.
+
+            This second paragraph forces space between list items.
+
+          - Item 2.
+
+    - In non-list contexts, omit <p> tags on a paragraph if that
+      paragraph is the only child of its parent (footnotes & citations
+      are allowed a label first).
+
+    - Regardless of the above, in definitions, table cells, field bodies,
+      option descriptions, and list items, mark the first child with
+      'class="first"' and the last child with 'class="last"'.  The stylesheet
+      sets the margins (top & bottom respectively) to 0 for these elements.
+
+    The ``no_compact_lists`` setting (``--no-compact-lists`` command-line
+    option) disables list whitespace optimization.
+    """
+
+    xml_declaration = '<?xml version="1.0" encoding="%s" ?>\n'
+    doctype = ('<!DOCTYPE html' 
+               ' PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"'
+               ' "http://www.w3.org/TR/xhtml1/DTD/'
+               'xhtml1-transitional.dtd">\n')
+    html_head = ('<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="%s" '
+                 'lang="%s">\n<head>\n')
+    content_type = ('<meta http-equiv="Content-Type" content="text/html; '
+                    'charset=%s" />\n')
+    generator = ('<meta name="generator" content="Docutils %s: '
+                 'http://docutils.sourceforge.net/" />\n')
+    stylesheet_link = '<link rel="stylesheet" href="%s" type="text/css" />\n'
+    embedded_stylesheet = '<style type="text/css"><!--\n\n%s\n--></style>\n'
+    named_tags = {'a': 1, 'applet': 1, 'form': 1, 'frame': 1, 'iframe': 1,
+                  'img': 1, 'map': 1}
+    words_and_spaces = re.compile(r'\S+| +|\n')
+
+    def __init__(self, document):
+        nodes.NodeVisitor.__init__(self, document)
+        self.settings = settings = document.settings
+        lcode = settings.language_code
+        self.language = languages.get_language(lcode)
+        self.head_prefix = [
+              self.doctype,
+              self.html_head % (lcode, lcode),
+              self.content_type % settings.output_encoding,
+              self.generator % docutils.__version__]
+        if settings.xml_declaration:
+            self.head_prefix.insert(0, self.xml_declaration
+                                    % settings.output_encoding)
+        self.head = []
+        if settings.embed_stylesheet:
+            stylesheet = self.get_stylesheet_reference(
+                os.path.join(os.getcwd(), 'dummy'))
+            stylesheet_text = open(stylesheet).read()
+            self.stylesheet = [self.embedded_stylesheet % stylesheet_text]
+        else:
+            stylesheet = self.get_stylesheet_reference()
+            if stylesheet:
+                self.stylesheet = [self.stylesheet_link % stylesheet]
+            else:
+                self.stylesheet = []
+        self.body_prefix = ['</head>\n<body>\n']
+        self.body_pre_docinfo = []
+        self.docinfo = []
+        self.body = []
+        self.body_suffix = ['</body>\n</html>\n']
+        self.section_level = 0
+        self.context = []
+        self.topic_class = ''
+        self.colspecs = []
+        self.compact_p = 1
+        self.compact_simple = None
+        self.in_docinfo = None
+        self.in_sidebar = None
+
+    def get_stylesheet_reference(self, relative_to=None):
+        settings = self.settings
+        if settings.stylesheet_path:
+            if relative_to == None:
+                relative_to = settings._destination
+            return utils.relative_path(relative_to, settings.stylesheet_path)
+        else:
+            return settings.stylesheet
+
+    def astext(self):
+        return ''.join(self.head_prefix + self.head
+                       + self.stylesheet + self.body_prefix
+                       + self.body_pre_docinfo + self.docinfo
+                       + self.body + self.body_suffix)
+
+    def encode(self, text):
+        """Encode special characters in `text` & return."""
+        # @@@ A codec to do these and all other HTML entities would be nice.
+        text = text.replace("&", "&amp;")
+        text = text.replace("<", "&lt;")
+        text = text.replace('"', "&quot;")
+        text = text.replace(">", "&gt;")
+        text = text.replace("@", "&#64;") # may thwart some address harvesters
+        return text
+
+    def attval(self, text,
+               whitespace=re.compile('[\n\r\t\v\f]')):
+        """Cleanse, HTML encode, and return attribute value text."""
+        return self.encode(whitespace.sub(' ', text))
+
+    def starttag(self, node, tagname, suffix='\n', infix='', **attributes):
+        """
+        Construct and return a start tag given a node (id & class attributes
+        are extracted), tag name, and optional attributes.
+        """
+        tagname = tagname.lower()
+        atts = {}
+        for (name, value) in attributes.items():
+            atts[name.lower()] = value
+        for att in ('class',):          # append to node attribute
+            if node.has_key(att) or atts.has_key(att):
+                atts[att] = \
+                      (node.get(att, '') + ' ' + atts.get(att, '')).strip()
+        for att in ('id',):             # node attribute overrides
+            if node.has_key(att):
+                atts[att] = node[att]
+        if atts.has_key('id') and self.named_tags.has_key(tagname):
+            atts['name'] = atts['id']   # for compatibility with old browsers
+        attlist = atts.items()
+        attlist.sort()
+        parts = [tagname]
+        for name, value in attlist:
+            if value is None:           # boolean attribute
+                # According to the HTML spec, ``<element boolean>`` is good,
+                # ``<element boolean="boolean">`` is bad.
+                # (But the XHTML (XML) spec says the opposite.  <sigh>)
+                parts.append(name.lower())
+            elif isinstance(value, ListType):
+                values = [str(v) for v in value]
+                parts.append('%s="%s"' % (name.lower(),
+                                          self.attval(' '.join(values))))
+            else:
+                parts.append('%s="%s"' % (name.lower(),
+                                          self.attval(str(value))))
+        return '<%s%s>%s' % (' '.join(parts), infix, suffix)
+
+    def emptytag(self, node, tagname, suffix='\n', **attributes):
+        """Construct and return an XML-compatible empty tag."""
+        return self.starttag(node, tagname, suffix, infix=' /', **attributes)
+
+    def visit_Text(self, node):
+        self.body.append(self.encode(node.astext()))
+
+    def depart_Text(self, node):
+        pass
+
+    def visit_abbreviation(self, node):
+        # @@@ implementation incomplete ("title" attribute)
+        self.body.append(self.starttag(node, 'abbr', ''))
+
+    def depart_abbreviation(self, node):
+        self.body.append('</abbr>')
+
+    def visit_acronym(self, node):
+        # @@@ implementation incomplete ("title" attribute)
+        self.body.append(self.starttag(node, 'acronym', ''))
+
+    def depart_acronym(self, node):
+        self.body.append('</acronym>')
+
+    def visit_address(self, node):
+        self.visit_docinfo_item(node, 'address', meta=None)
+        self.body.append(self.starttag(node, 'pre', CLASS='address'))
+
+    def depart_address(self, node):
+        self.body.append('\n</pre>\n')
+        self.depart_docinfo_item()
+
+    def visit_admonition(self, node, name=''):
+        self.body.append(self.starttag(node, 'div',
+                                        CLASS=(name or 'admonition')))
+        if name:
+            self.body.append('<p class="admonition-title">'
+                             + self.language.labels[name] + '</p>\n')
+
+    def depart_admonition(self, node=None):
+        self.body.append('</div>\n')
+
+    def visit_attention(self, node):
+        self.visit_admonition(node, 'attention')
+
+    def depart_attention(self, node):
+        self.depart_admonition()
+
+    attribution_formats = {'dash': ('&mdash;', ''),
+                           'parentheses': ('(', ')'),
+                           'parens': ('(', ')'),
+                           'none': ('', '')}
+
+    def visit_attribution(self, node):
+        prefix, suffix = self.attribution_formats[self.settings.attribution]
+        self.context.append(suffix)
+        self.body.append(
+            self.starttag(node, 'p', prefix, CLASS='attribution'))
+
+    def depart_attribution(self, node):
+        self.body.append(self.context.pop() + '</p>\n')
+
+    def visit_author(self, node):
+        self.visit_docinfo_item(node, 'author')
+
+    def depart_author(self, node):
+        self.depart_docinfo_item()
+
+    def visit_authors(self, node):
+        pass
+
+    def depart_authors(self, node):
+        pass
+
+    def visit_block_quote(self, node):
+        self.body.append(self.starttag(node, 'blockquote'))
+
+    def depart_block_quote(self, node):
+        self.body.append('</blockquote>\n')
+
+    def check_simple_list(self, node):
+        """Check for a simple list that can be rendered compactly."""
+        visitor = SimpleListChecker(self.document)
+        try:
+            node.walk(visitor)
+        except nodes.NodeFound:
+            return None
+        else:
+            return 1
+
+    def visit_bullet_list(self, node):
+        atts = {}
+        old_compact_simple = self.compact_simple
+        self.context.append((self.compact_simple, self.compact_p))
+        self.compact_p = None
+        self.compact_simple = (self.settings.compact_lists and
+                               (self.compact_simple
+                                or self.topic_class == 'contents'
+                                or self.check_simple_list(node)))
+        if self.compact_simple and not old_compact_simple:
+            atts['class'] = 'simple'
+        self.body.append(self.starttag(node, 'ul', **atts))
+
+    def depart_bullet_list(self, node):
+        self.compact_simple, self.compact_p = self.context.pop()
+        self.body.append('</ul>\n')
+
+    def visit_caption(self, node):
+        self.body.append(self.starttag(node, 'p', '', CLASS='caption'))
+
+    def depart_caption(self, node):
+        self.body.append('</p>\n')
+
+    def visit_caution(self, node):
+        self.visit_admonition(node, 'caution')
+
+    def depart_caution(self, node):
+        self.depart_admonition()
+
+    def visit_citation(self, node):
+        self.body.append(self.starttag(node, 'table', CLASS='citation',
+                                       frame="void", rules="none"))
+        self.body.append('<colgroup><col class="label" /><col /></colgroup>\n'
+                         '<col />\n'
+                         '<tbody valign="top">\n'
+                         '<tr>')
+        self.footnote_backrefs(node)
+
+    def depart_citation(self, node):
+        self.body.append('</td></tr>\n'
+                         '</tbody>\n</table>\n')
+
+    def visit_citation_reference(self, node):
+        href = ''
+        if node.has_key('refid'):
+            href = '#' + node['refid']
+        elif node.has_key('refname'):
+            href = '#' + self.document.nameids[node['refname']]
+        self.body.append(self.starttag(node, 'a', '[', href=href,
+                                       CLASS='citation-reference'))
+
+    def depart_citation_reference(self, node):
+        self.body.append(']</a>')
+
+    def visit_classifier(self, node):
+        self.body.append(' <span class="classifier-delimiter">:</span> ')
+        self.body.append(self.starttag(node, 'span', '', CLASS='classifier'))
+
+    def depart_classifier(self, node):
+        self.body.append('</span>')
+
+    def visit_colspec(self, node):
+        self.colspecs.append(node)
+
+    def depart_colspec(self, node):
+        pass
+
+    def write_colspecs(self):
+        width = 0
+        for node in self.colspecs:
+            width += node['colwidth']
+        for node in self.colspecs:
+            colwidth = int(node['colwidth'] * 100.0 / width + 0.5)
+            self.body.append(self.emptytag(node, 'col',
+                                           width='%i%%' % colwidth))
+        self.colspecs = []
+
+    def visit_comment(self, node,
+                      sub=re.compile('-(?=-)').sub):
+        """Escape double-dashes in comment text."""
+        self.body.append('<!-- %s -->\n' % sub('- ', node.astext()))
+        # Content already processed:
+        raise nodes.SkipNode
+
+    def visit_contact(self, node):
+        self.visit_docinfo_item(node, 'contact', meta=None)
+
+    def depart_contact(self, node):
+        self.depart_docinfo_item()
+
+    def visit_copyright(self, node):
+        self.visit_docinfo_item(node, 'copyright')
+
+    def depart_copyright(self, node):
+        self.depart_docinfo_item()
+
+    def visit_danger(self, node):
+        self.visit_admonition(node, 'danger')
+
+    def depart_danger(self, node):
+        self.depart_admonition()
+
+    def visit_date(self, node):
+        self.visit_docinfo_item(node, 'date')
+
+    def depart_date(self, node):
+        self.depart_docinfo_item()
+
+    def visit_decoration(self, node):
+        pass
+
+    def depart_decoration(self, node):
+        pass
+
+    def visit_definition(self, node):
+        self.body.append('</dt>\n')
+        self.body.append(self.starttag(node, 'dd', ''))
+        if len(node):
+            node[0].set_class('first')
+            node[-1].set_class('last')
+
+    def depart_definition(self, node):
+        self.body.append('</dd>\n')
+
+    def visit_definition_list(self, node):
+        self.body.append(self.starttag(node, 'dl'))
+
+    def depart_definition_list(self, node):
+        self.body.append('</dl>\n')
+
+    def visit_definition_list_item(self, node):
+        pass
+
+    def depart_definition_list_item(self, node):
+        pass
+
+    def visit_description(self, node):
+        self.body.append(self.starttag(node, 'td', ''))
+        if len(node):
+            node[0].set_class('first')
+            node[-1].set_class('last')
+
+    def depart_description(self, node):
+        self.body.append('</td>')
+
+    def visit_docinfo(self, node):
+        self.context.append(len(self.body))
+        self.body.append(self.starttag(node, 'table', CLASS='docinfo',
+                                       frame="void", rules="none"))
+        self.body.append('<col class="docinfo-name" />\n'
+                         '<col class="docinfo-content" />\n'
+                         '<tbody valign="top">\n')
+        self.in_docinfo = 1
+
+    def depart_docinfo(self, node):
+        self.body.append('</tbody>\n</table>\n')
+        self.in_docinfo = None
+        start = self.context.pop()
+        self.body_pre_docinfo = self.body[:start]
+        self.docinfo = self.body[start:]
+        self.body = []
+
+    def visit_docinfo_item(self, node, name, meta=1):
+        if meta:
+            self.head.append('<meta name="%s" content="%s" />\n'
+                             % (name, self.attval(node.astext())))
+        self.body.append(self.starttag(node, 'tr', ''))
+        self.body.append('<th class="docinfo-name">%s:</th>\n<td>'
+                         % self.language.labels[name])
+        if len(node):
+            if isinstance(node[0], nodes.Element):
+                node[0].set_class('first')
+            if isinstance(node[-1], nodes.Element):
+                node[-1].set_class('last')
+
+    def depart_docinfo_item(self):
+        self.body.append('</td></tr>\n')
+
+    def visit_doctest_block(self, node):
+        self.body.append(self.starttag(node, 'pre', CLASS='doctest-block'))
+
+    def depart_doctest_block(self, node):
+        self.body.append('\n</pre>\n')
+
+    def visit_document(self, node):
+        self.body.append(self.starttag(node, 'div', CLASS='document'))
+
+    def depart_document(self, node):
+        self.body.append('</div>\n')
+
+    def visit_emphasis(self, node):
+        self.body.append('<em>')
+
+    def depart_emphasis(self, node):
+        self.body.append('</em>')
+
+    def visit_entry(self, node):
+        if isinstance(node.parent.parent, nodes.thead):
+            tagname = 'th'
+        else:
+            tagname = 'td'
+        atts = {}
+        if node.has_key('morerows'):
+            atts['rowspan'] = node['morerows'] + 1
+        if node.has_key('morecols'):
+            atts['colspan'] = node['morecols'] + 1
+        self.body.append(self.starttag(node, tagname, '', **atts))
+        self.context.append('</%s>\n' % tagname.lower())
+        if len(node) == 0:              # empty cell
+            self.body.append('&nbsp;')
+        else:
+            node[0].set_class('first')
+            node[-1].set_class('last')
+
+    def depart_entry(self, node):
+        self.body.append(self.context.pop())
+
+    def visit_enumerated_list(self, node):
+        """
+        The 'start' attribute does not conform to HTML 4.01's strict.dtd, but
+        CSS1 doesn't help. CSS2 isn't widely enough supported yet to be
+        usable.
+        """
+        atts = {}
+        if node.has_key('start'):
+            atts['start'] = node['start']
+        if node.has_key('enumtype'):
+            atts['class'] = node['enumtype']
+        # @@@ To do: prefix, suffix. How? Change prefix/suffix to a
+        # single "format" attribute? Use CSS2?
+        old_compact_simple = self.compact_simple
+        self.context.append((self.compact_simple, self.compact_p))
+        self.compact_p = None
+        self.compact_simple = (self.settings.compact_lists and
+                               (self.compact_simple
+                                or self.topic_class == 'contents'
+                                or self.check_simple_list(node)))
+        if self.compact_simple and not old_compact_simple:
+            atts['class'] = (atts.get('class', '') + ' simple').strip()
+        self.body.append(self.starttag(node, 'ol', **atts))
+
+    def depart_enumerated_list(self, node):
+        self.compact_simple, self.compact_p = self.context.pop()
+        self.body.append('</ol>\n')
+
+    def visit_error(self, node):
+        self.visit_admonition(node, 'error')
+
+    def depart_error(self, node):
+        self.depart_admonition()
+
+    def visit_field(self, node):
+        self.body.append(self.starttag(node, 'tr', '', CLASS='field'))
+
+    def depart_field(self, node):
+        self.body.append('</tr>\n')
+
+    def visit_field_body(self, node):
+        self.body.append(self.starttag(node, 'td', '', CLASS='field-body'))
+        if len(node):
+            node[0].set_class('first')
+            node[-1].set_class('last')
+
+    def depart_field_body(self, node):
+        self.body.append('</td>\n')
+
+    def visit_field_list(self, node):
+        self.body.append(self.starttag(node, 'table', frame='void',
+                                       rules='none', CLASS='field-list'))
+        self.body.append('<col class="field-name" />\n'
+                         '<col class="field-body" />\n'
+                         '<tbody valign="top">\n')
+
+    def depart_field_list(self, node):
+        self.body.append('</tbody>\n</table>\n')
+
+    def visit_field_name(self, node):
+        atts = {}
+        if self.in_docinfo:
+            atts['class'] = 'docinfo-name'
+        else:
+            atts['class'] = 'field-name'
+        if len(node.astext()) > 14:
+            atts['colspan'] = 2
+            self.context.append('</tr>\n<tr><td>&nbsp;</td>')
+        else:
+            self.context.append('')
+        self.body.append(self.starttag(node, 'th', '', **atts))
+
+    def depart_field_name(self, node):
+        self.body.append(':</th>')
+        self.body.append(self.context.pop())
+
+    def visit_figure(self, node):
+        atts = {'class': 'figure'}
+        if node.get('width'):
+            atts['style'] = 'width: %spx' % node['width']
+        self.body.append(self.starttag(node, 'div', **atts))
+
+    def depart_figure(self, node):
+        self.body.append('</div>\n')
+
+    def visit_footer(self, node):
+        self.context.append(len(self.body))
+
+    def depart_footer(self, node):
+        start = self.context.pop()
+        footer = (['<hr class="footer"/>\n',
+                   self.starttag(node, 'div', CLASS='footer')]
+                  + self.body[start:] + ['</div>\n'])
+        self.body_suffix[:0] = footer
+        del self.body[start:]
+
+    def visit_footnote(self, node):
+        self.body.append(self.starttag(node, 'table', CLASS='footnote',
+                                       frame="void", rules="none"))
+        self.body.append('<colgroup><col class="label" /><col /></colgroup>\n'
+                         '<tbody valign="top">\n'
+                         '<tr>')
+        self.footnote_backrefs(node)
+
+    def footnote_backrefs(self, node):
+        if self.settings.footnote_backlinks and node.hasattr('backrefs'):
+            backrefs = node['backrefs']
+            if len(backrefs) == 1:
+                self.context.append('')
+                self.context.append('<a class="fn-backref" href="#%s" '
+                                    'name="%s">' % (backrefs[0], node['id']))
+            else:
+                i = 1
+                backlinks = []
+                for backref in backrefs:
+                    backlinks.append('<a class="fn-backref" href="#%s">%s</a>'
+                                     % (backref, i))
+                    i += 1
+                self.context.append('<em>(%s)</em> ' % ', '.join(backlinks))
+                self.context.append('<a name="%s">' % node['id'])
+        else:
+            self.context.append('')
+            self.context.append('<a name="%s">' % node['id'])
+
+    def depart_footnote(self, node):
+        self.body.append('</td></tr>\n'
+                         '</tbody>\n</table>\n')
+
+    def visit_footnote_reference(self, node):
+        href = ''
+        if node.has_key('refid'):
+            href = '#' + node['refid']
+        elif node.has_key('refname'):
+            href = '#' + self.document.nameids[node['refname']]
+        format = self.settings.footnote_references
+        if format == 'brackets':
+            suffix = '['
+            self.context.append(']')
+        elif format == 'superscript':
+            suffix = '<sup>'
+            self.context.append('</sup>')
+        else:                           # shouldn't happen
+            suffix = '???'
+            self.context.append('???')
+        self.body.append(self.starttag(node, 'a', suffix, href=href,
+                                       CLASS='footnote-reference'))
+
+    def depart_footnote_reference(self, node):
+        self.body.append(self.context.pop() + '</a>')
+
+    def visit_generated(self, node):
+        pass
+
+    def depart_generated(self, node):
+        pass
+
+    def visit_header(self, node):
+        self.context.append(len(self.body))
+
+    def depart_header(self, node):
+        start = self.context.pop()
+        self.body_prefix.append(self.starttag(node, 'div', CLASS='header'))
+        self.body_prefix.extend(self.body[start:])
+        self.body_prefix.append('<hr />\n</div>\n')
+        del self.body[start:]
+
+    def visit_hint(self, node):
+        self.visit_admonition(node, 'hint')
+
+    def depart_hint(self, node):
+        self.depart_admonition()
+
+    def visit_image(self, node):
+        atts = node.attributes.copy()
+        atts['src'] = atts['uri']
+        del atts['uri']
+        if not atts.has_key('alt'):
+            atts['alt'] = atts['src']
+        if isinstance(node.parent, nodes.TextElement):
+            self.context.append('')
+        else:
+            self.body.append('<p>')
+            self.context.append('</p>\n')
+        self.body.append(self.emptytag(node, 'img', '', **atts))
+
+    def depart_image(self, node):
+        self.body.append(self.context.pop())
+
+    def visit_important(self, node):
+        self.visit_admonition(node, 'important')
+
+    def depart_important(self, node):
+        self.depart_admonition()
+
+    def visit_inline(self, node):
+        self.body.append(self.starttag(node, 'span', ''))
+
+    def depart_inline(self, node):
+        self.body.append('</span>')
+
+    def visit_label(self, node):
+        self.body.append(self.starttag(node, 'td', '%s[' % self.context.pop(),
+                                       CLASS='label'))
+
+    def depart_label(self, node):
+        self.body.append(']</a></td><td>%s' % self.context.pop())
+
+    def visit_legend(self, node):
+        self.body.append(self.starttag(node, 'div', CLASS='legend'))
+
+    def depart_legend(self, node):
+        self.body.append('</div>\n')
+
+    def visit_line_block(self, node):
+        self.body.append(self.starttag(node, 'pre', CLASS='line-block'))
+
+    def depart_line_block(self, node):
+        self.body.append('\n</pre>\n')
+
+    def visit_list_item(self, node):
+        self.body.append(self.starttag(node, 'li', ''))
+        if len(node):
+            node[0].set_class('first')
+
+    def depart_list_item(self, node):
+        self.body.append('</li>\n')
+
+    def visit_literal(self, node):
+        """Process text to prevent tokens from wrapping."""
+        self.body.append(self.starttag(node, 'tt', '', CLASS='literal'))
+        text = node.astext()
+        for token in self.words_and_spaces.findall(text):
+            if token.strip():
+                # Protect text like "--an-option" from bad line wrapping:
+                self.body.append('<span class="pre">%s</span>'
+                                 % self.encode(token))
+            elif token in ('\n', ' '):
+                # Allow breaks at whitespace:
+                self.body.append(token)
+            else:
+                # Protect runs of multiple spaces; the last space can wrap:
+                self.body.append('&nbsp;' * (len(token) - 1) + ' ')
+        self.body.append('</tt>')
+        # Content already processed:
+        raise nodes.SkipNode
+
+    def visit_literal_block(self, node):
+        self.body.append(self.starttag(node, 'pre', CLASS='literal-block'))
+
+    def depart_literal_block(self, node):
+        self.body.append('\n</pre>\n')
+
+    def visit_meta(self, node):
+        self.head.append(self.emptytag(node, 'meta', **node.attributes))
+
+    def depart_meta(self, node):
+        pass
+
+    def visit_note(self, node):
+        self.visit_admonition(node, 'note')
+
+    def depart_note(self, node):
+        self.depart_admonition()
+
+    def visit_option(self, node):
+        if self.context[-1]:
+            self.body.append(', ')
+
+    def depart_option(self, node):
+        self.context[-1] += 1
+
+    def visit_option_argument(self, node):
+        self.body.append(node.get('delimiter', ' '))
+        self.body.append(self.starttag(node, 'var', ''))
+
+    def depart_option_argument(self, node):
+        self.body.append('</var>')
+
+    def visit_option_group(self, node):
+        atts = {}
+        if len(node.astext()) > 14:
+            atts['colspan'] = 2
+            self.context.append('</tr>\n<tr><td>&nbsp;</td>')
+        else:
+            self.context.append('')
+        self.body.append(self.starttag(node, 'td', **atts))
+        self.body.append('<kbd>')
+        self.context.append(0)          # count number of options
+
+    def depart_option_group(self, node):
+        self.context.pop()
+        self.body.append('</kbd></td>\n')
+        self.body.append(self.context.pop())
+
+    def visit_option_list(self, node):
+        self.body.append(
+              self.starttag(node, 'table', CLASS='option-list',
+                            frame="void", rules="none"))
+        self.body.append('<col class="option" />\n'
+                         '<col class="description" />\n'
+                         '<tbody valign="top">\n')
+
+    def depart_option_list(self, node):
+        self.body.append('</tbody>\n</table>\n')
+
+    def visit_option_list_item(self, node):
+        self.body.append(self.starttag(node, 'tr', ''))
+
+    def depart_option_list_item(self, node):
+        self.body.append('</tr>\n')
+
+    def visit_option_string(self, node):
+        self.body.append(self.starttag(node, 'span', '', CLASS='option'))
+
+    def depart_option_string(self, node):
+        self.body.append('</span>')
+
+    def visit_organization(self, node):
+        self.visit_docinfo_item(node, 'organization')
+
+    def depart_organization(self, node):
+        self.depart_docinfo_item()
+
+    def visit_paragraph(self, node):
+        # Omit <p> tags if this is an only child and optimizable.
+        if (self.compact_simple or
+            self.compact_p and (len(node.parent) == 1 or
+                                len(node.parent) == 2 and
+                                isinstance(node.parent[0], nodes.label))):
+            self.context.append('')
+        else:
+            self.body.append(self.starttag(node, 'p', ''))
+            self.context.append('</p>\n')
+
+    def depart_paragraph(self, node):
+        self.body.append(self.context.pop())
+
+    def visit_problematic(self, node):
+        if node.hasattr('refid'):
+            self.body.append('<a href="#%s" name="%s">' % (node['refid'],
+                                                           node['id']))
+            self.context.append('</a>')
+        else:
+            self.context.append('')
+        self.body.append(self.starttag(node, 'span', '', CLASS='problematic'))
+
+    def depart_problematic(self, node):
+        self.body.append('</span>')
+        self.body.append(self.context.pop())
+
+    def visit_raw(self, node):
+        if node.get('format') == 'html':
+            self.body.append(node.astext())
+        # Keep non-HTML raw text out of output:
+        raise nodes.SkipNode
+
+    def visit_reference(self, node):
+        if node.has_key('refuri'):
+            href = node['refuri']
+        elif node.has_key('refid'):
+            href = '#' + node['refid']
+        elif node.has_key('refname'):
+            href = '#' + self.document.nameids[node['refname']]
+        self.body.append(self.starttag(node, 'a', '', href=href,
+                                       CLASS='reference'))
+
+    def depart_reference(self, node):
+        self.body.append('</a>')
+
+    def visit_revision(self, node):
+        self.visit_docinfo_item(node, 'revision', meta=None)
+
+    def depart_revision(self, node):
+        self.depart_docinfo_item()
+
+    def visit_row(self, node):
+        self.body.append(self.starttag(node, 'tr', ''))
+
+    def depart_row(self, node):
+        self.body.append('</tr>\n')
+
+    def visit_rubric(self, node):
+        self.body.append(self.starttag(node, 'p', '', CLASS='rubric'))
+
+    def depart_rubric(self, node):
+        self.body.append('</p>\n')
+
+    def visit_section(self, node):
+        self.section_level += 1
+        self.body.append(self.starttag(node, 'div', CLASS='section'))
+
+    def depart_section(self, node):
+        self.section_level -= 1
+        self.body.append('</div>\n')
+
+    def visit_sidebar(self, node):
+        self.body.append(self.starttag(node, 'div', CLASS='sidebar'))
+        self.in_sidebar = 1
+
+    def depart_sidebar(self, node):
+        self.body.append('</div>\n')
+        self.in_sidebar = None
+
+    def visit_status(self, node):
+        self.visit_docinfo_item(node, 'status', meta=None)
+
+    def depart_status(self, node):
+        self.depart_docinfo_item()
+
+    def visit_strong(self, node):
+        self.body.append('<strong>')
+
+    def depart_strong(self, node):
+        self.body.append('</strong>')
+
+    def visit_subscript(self, node):
+        self.body.append(self.starttag(node, 'sub', ''))
+
+    def depart_subscript(self, node):
+        self.body.append('</sub>')
+
+    def visit_substitution_definition(self, node):
+        """Internal only."""
+        raise nodes.SkipNode
+
+    def visit_substitution_reference(self, node):
+        self.unimplemented_visit(node)
+
+    def visit_subtitle(self, node):
+        if isinstance(node.parent, nodes.sidebar):
+            self.body.append(self.starttag(node, 'p', '',
+                                           CLASS='sidebar-subtitle'))
+            self.context.append('</p>\n')
+        else:
+            self.body.append(self.starttag(node, 'h2', '', CLASS='subtitle'))
+            self.context.append('</h2>\n')
+
+    def depart_subtitle(self, node):
+        self.body.append(self.context.pop())
+
+    def visit_superscript(self, node):
+        self.body.append(self.starttag(node, 'sup', ''))
+
+    def depart_superscript(self, node):
+        self.body.append('</sup>')
+
+    def visit_system_message(self, node):
+        if node['level'] < self.document.reporter['writer'].report_level:
+            # Level is too low to display:
+            raise nodes.SkipNode
+        self.body.append(self.starttag(node, 'div', CLASS='system-message'))
+        self.body.append('<p class="system-message-title">')
+        attr = {}
+        backref_text = ''
+        if node.hasattr('id'):
+            attr['name'] = node['id']
+        if node.hasattr('backrefs'):
+            backrefs = node['backrefs']
+            if len(backrefs) == 1:
+                backref_text = ('; <em><a href="#%s">backlink</a></em>'
+                                % backrefs[0])
+            else:
+                i = 1
+                backlinks = []
+                for backref in backrefs:
+                    backlinks.append('<a href="#%s">%s</a>' % (backref, i))
+                    i += 1
+                backref_text = ('; <em>backlinks: %s</em>'
+                                % ', '.join(backlinks))
+        if node.hasattr('line'):
+            line = ', line %s' % node['line']
+        else:
+            line = ''
+        if attr:
+            a_start = self.starttag({}, 'a', '', **attr)
+            a_end = '</a>'
+        else:
+            a_start = a_end = ''
+        self.body.append('System Message: %s%s/%s%s (<tt>%s</tt>%s)%s</p>\n'
+                         % (a_start, node['type'], node['level'], a_end,
+                            self.encode(node['source']), line, backref_text))
+
+    def depart_system_message(self, node):
+        self.body.append('</div>\n')
+
+    def visit_table(self, node):
+        self.body.append(
+              self.starttag(node, 'table', CLASS="table",
+                            frame='border', rules='all'))
+
+    def depart_table(self, node):
+        self.body.append('</table>\n')
+
+    def visit_target(self, node):
+        if not (node.has_key('refuri') or node.has_key('refid')
+                or node.has_key('refname')):
+            self.body.append(self.starttag(node, 'a', '', CLASS='target'))
+            self.context.append('</a>')
+        else:
+            self.context.append('')
+
+    def depart_target(self, node):
+        self.body.append(self.context.pop())
+
+    def visit_tbody(self, node):
+        self.write_colspecs()
+        self.body.append(self.context.pop()) # '</colgroup>\n' or ''
+        self.body.append(self.starttag(node, 'tbody', valign='top'))
+
+    def depart_tbody(self, node):
+        self.body.append('</tbody>\n')
+
+    def visit_term(self, node):
+        self.body.append(self.starttag(node, 'dt', ''))
+
+    def depart_term(self, node):
+        """
+        Leave the end tag to `self.visit_definition()`, in case there's a
+        classifier.
+        """
+        pass
+
+    def visit_tgroup(self, node):
+        # Mozilla needs <colgroup>:
+        self.body.append(self.starttag(node, 'colgroup'))
+        # Appended by thead or tbody:
+        self.context.append('</colgroup>\n')
+
+    def depart_tgroup(self, node):
+        pass
+
+    def visit_thead(self, node):
+        self.write_colspecs()
+        self.body.append(self.context.pop()) # '</colgroup>\n'
+        # There may or may not be a <thead>; this is for <tbody> to use:
+        self.context.append('')
+        self.body.append(self.starttag(node, 'thead', valign='bottom'))
+
+    def depart_thead(self, node):
+        self.body.append('</thead>\n')
+
+    def visit_tip(self, node):
+        self.visit_admonition(node, 'tip')
+
+    def depart_tip(self, node):
+        self.depart_admonition()
+
+    def visit_title(self, node):
+        """Only 6 section levels are supported by HTML."""
+        check_id = 0
+        if isinstance(node.parent, nodes.topic):
+            self.body.append(
+                  self.starttag(node, 'p', '', CLASS='topic-title'))
+            check_id = 1
+        elif isinstance(node.parent, nodes.sidebar):
+            self.body.append(
+                  self.starttag(node, 'p', '', CLASS='sidebar-title'))
+            check_id = 1
+        elif isinstance(node.parent, nodes.admonition):
+            self.body.append(
+                  self.starttag(node, 'p', '', CLASS='admonition-title'))
+            check_id = 1
+        elif self.section_level == 0:
+            # document title
+            self.head.append('<title>%s</title>\n'
+                             % self.encode(node.astext()))
+            self.body.append(self.starttag(node, 'h1', '', CLASS='title'))
+            self.context.append('</h1>\n')
+        else:
+            self.body.append(
+                  self.starttag(node, 'h%s' % self.section_level, ''))
+            atts = {}
+            if node.parent.hasattr('id'):
+                atts['name'] = node.parent['id']
+            if node.hasattr('refid'):
+                atts['class'] = 'toc-backref'
+                atts['href'] = '#' + node['refid']
+            self.body.append(self.starttag({}, 'a', '', **atts))
+            self.context.append('</a></h%s>\n' % (self.section_level))
+        if check_id:
+            if node.parent.hasattr('id'):
+                self.body.append(
+                    self.starttag({}, 'a', '', name=node.parent['id']))
+                self.context.append('</a></p>\n')
+            else:
+                self.context.append('</p>\n')
+
+    def depart_title(self, node):
+        self.body.append(self.context.pop())
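
For orientation, a rough sketch (editorial note, not part of the committed
file) of what the title handling above emits for an ordinary section:

    # Illustrative sketch only; attribute details may differ slightly.
    # A level-2 section whose parent node carries id "usage" and whose
    # title text is "Usage notes" is written out approximately as:
    #
    #   <h2><a name="usage">Usage notes</a></h2>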
+
+    def visit_title_reference(self, node):
+        self.body.append(self.starttag(node, 'cite', ''))
+
+    def depart_title_reference(self, node):
+        self.body.append('</cite>')
+
+    def visit_topic(self, node):
+        self.body.append(self.starttag(node, 'div', CLASS='topic'))
+        self.topic_class = node.get('class')
+
+    def depart_topic(self, node):
+        self.body.append('</div>\n')
+        self.topic_class = ''
+
+    def visit_transition(self, node):
+        self.body.append(self.emptytag(node, 'hr'))
+
+    def depart_transition(self, node):
+        pass
+
+    def visit_version(self, node):
+        self.visit_docinfo_item(node, 'version', meta=None)
+
+    def depart_version(self, node):
+        self.depart_docinfo_item()
+
+    def visit_warning(self, node):
+        self.visit_admonition(node, 'warning')
+
+    def depart_warning(self, node):
+        self.depart_admonition()
+
+    def unimplemented_visit(self, node):
+        raise NotImplementedError('visiting unimplemented node type: %s'
+                                  % node.__class__.__name__)
+
+
+class SimpleListChecker(nodes.GenericNodeVisitor):
+
+    """
+    Raise `nodes.NodeFound` if a non-simple list item is encountered.
+
+    Here "simple" means a list item containing nothing other than a single
+    paragraph, a simple list, or a paragraph followed by a simple list.
+    """
+
+    def default_visit(self, node):
+        raise nodes.NodeFound
+
+    def visit_bullet_list(self, node):
+        pass
+
+    def visit_enumerated_list(self, node):
+        pass
+
+    def visit_list_item(self, node):
+        children = []
+        for child in node.get_children():
+            if not isinstance(child, nodes.Invisible):
+                children.append(child)
+        if (children and isinstance(children[0], nodes.paragraph)
+            and (isinstance(children[-1], nodes.bullet_list)
+                 or isinstance(children[-1], nodes.enumerated_list))):
+            children.pop()
+        if len(children) <= 1:
+            return
+        else:
+            raise nodes.NodeFound
+
+    def visit_paragraph(self, node):
+        raise nodes.SkipNode
+
+    def invisible_visit(self, node):
+        """Invisible nodes should be ignored."""
+        pass
+
+    visit_comment = invisible_visit
+    visit_substitution_definition = invisible_visit
+    visit_target = invisible_visit
+    visit_pending = invisible_visit
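
A brief editorial sketch of how this checker is typically driven by the
translator (the driver lives earlier in html4css1.py and is not part of this
hunk; the method below is only a minimal reconstruction of that pattern):

    # Illustrative sketch only -- not part of the committed file.
    def check_simple_list(self, node):
        """Check whether `node` is a list that can be rendered compactly."""
        visitor = SimpleListChecker(self.document)
        try:
            node.walk(visitor)      # raises NodeFound on a non-simple item
        except nodes.NodeFound:
            return None             # keep the normal (loose) rendering
        else:
            return 1                # compact rendering is safe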

Added: trunk/www/utils/helpers/docutils/docutils/writers/latex2e.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/writers/latex2e.py        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/writers/latex2e.py        2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,1433 @@
+"""
+:Author: Engelbert Gruber
+:Contact: address@hidden
+:Revision: $Revision: 1.38 $
+:Date: $Date: 2003/06/23 07:52:08 $
+:Copyright: This module has been placed in the public domain.
+
+LaTeX2e document tree Writer.
+"""
+
+__docformat__ = 'reStructuredText'
+
+# Code contributions from several people are included; thanks to all.
+# Some are named: David Abrahams, Julien Letessier; apologies to anyone missing.
+#
+# Convention: deactivated code is marked by a double comment, e.g. ##.
+
+import sys
+import time
+import re
+import string
+from types import ListType
+from docutils import writers, nodes, languages
+
+class Writer(writers.Writer):
+
+    supported = ('latex','latex2e')
+    """Formats this writer supports."""
+
+    settings_spec = (
+        'LaTeX-Specific Options',
+        'The LaTeX "--output-encoding" default is "latin-1:strict".',
+        (('Specify documentclass.  Default is "article".',
+          ['--documentclass'],
+          {'default': 'article', }),
+         ('Format for footnote references: one of "superscript" or '
+          '"brackets".  Default is "brackets".',
+          ['--footnote-references'],
+          {'choices': ['superscript', 'brackets'], 'default': 'brackets',
+           'metavar': '<format>'}),
+         ('Format for block quote attributions: one of "dash" (em-dash '
+          'prefix), "parentheses"/"parens", or "none".  Default is "dash".',
+          ['--attribution'],
+          {'choices': ['dash', 'parentheses', 'parens', 'none'],
+           'default': 'dash', 'metavar': '<format>'}),
+         ('Specify a stylesheet file. The file will be "input" by latex '
+          'in the document header. Default is "style.tex". '
+          'If this is set to "", stylesheet input is disabled. '
+          'Overridden by --stylesheet-path.',
+          ['--stylesheet'],
+          {'default': 'style.tex', 'metavar': '<file>'}),
+         ('Specify a stylesheet file, relative to the current working '
+          'directory. Overrides --stylesheet.',
+          ['--stylesheet-path'],
+          {'metavar': '<file>'}),
+         ('Link to the stylesheet in the output LaTeX file.  This is the '
+          'default.',
+          ['--link-stylesheet'],
+          {'dest': 'embed_stylesheet', 'action': 'store_false'}),
+         ('Embed the stylesheet in the output LaTeX file.  The stylesheet '
+          'file must be accessible during processing (--stylesheet-path is '
+          'recommended).',
+          ['--embed-stylesheet'],
+          {'action': 'store_true'}),
+         ('Table of contents by Docutils (default) or LaTeX. The LaTeX '
+          'writer supports only one ToC per document, but Docutils does '
+          'not write page numbers.',
+          ['--use-latex-toc'], {'default': 0}),
+         ('Color of any hyperlinks embedded in text '
+          '(default: "blue", "0" to disable).',
+          ['--hyperlink-color'], {'default': 'blue'}),))
+
+    settings_defaults = {'output_encoding': 'latin-1'}
+
+    output = None
+    """Final translated form of `document`."""
+
+    def translate(self):
+        visitor = LaTeXTranslator(self.document)
+        self.document.walkabout(visitor)
+        self.output = visitor.astext()
+        self.head_prefix = visitor.head_prefix
+        self.head = visitor.head
+        self.body_prefix = visitor.body_prefix
+        self.body = visitor.body
+        self.body_suffix = visitor.body_suffix
+
+"""
+Notes on LaTeX
+--------------
+
+* LaTeX does not support multiple ToCs in one document
+  (probably only a limitation for the Docutils documentation itself).
+
+* width 
+
+  * linewidth - width of a line in the local environment
+  * textwidth - the width of text on the page
+
+  Maybe always use linewidth ?
+"""    
+
+class Babel:
+    """Language specifics for LaTeX."""
+    # Country-code table contributed by a.schlock; partly converted by hand
+    # from ISO 639 and Babel documentation (dialects and some languages added).
+    _ISO639_TO_BABEL = {
+        'no': 'norsk',     #XXX added by hand ( forget about nynorsk?)
+        'gd': 'scottish',  #XXX added by hand
+        'sl': 'slovenian',
+        'af': 'afrikaans',
+        'bg': 'bulgarian',
+        'br': 'breton',
+        'ca': 'catalan',
+        'cs': 'czech',
+        'cy': 'welsh',
+        'da': 'danish',
+        'fr': 'french',
+        # french, francais, canadien, acadian
+        'de': 'ngerman',  #XXX rather than german
+        # ngerman, naustrian, german, germanb, austrian
+        'el': 'greek',
+        'en': 'english',
+        # english, USenglish, american, UKenglish, british, canadian
+        'eo': 'esperanto',
+        'es': 'spanish',
+        'et': 'estonian',
+        'eu': 'basque',
+        'fi': 'finnish',
+        'ga': 'irish',
+        'gl': 'galician',
+        'he': 'hebrew',
+        'hr': 'croatian',
+        'hu': 'hungarian',
+        'is': 'icelandic',
+        'it': 'italian',
+        'la': 'latin',
+        'nl': 'dutch',
+        'pl': 'polish',
+        'pt': 'portuguese',
+        'ro': 'romanian',
+        'ru': 'russian',
+        'sk': 'slovak',
+        'sr': 'serbian',
+        'sv': 'swedish',
+        'tr': 'turkish',
+        'uk': 'ukrainian'
+    }
+
+    def __init__(self,lang):
+        self.language = lang
+        # pdflatex does not produce double quotes for ngerman in tt.
+        self.double_quote_replacment = None
+        if re.search('^de',self.language):
+            # maybe use: {\glqq} {\grqq}.
+            self.quotes = ("\"`", "\"'")
+            self.double_quote_replacment = "{\\dq}"
+        else:    
+            self.quotes = ("``", "''")
+        self.quote_index = 0
+        
+    def next_quote(self):
+        q = self.quotes[self.quote_index]
+        self.quote_index = (self.quote_index+1)%2
+        return q
+
+    def quote_quotes(self,text):
+        t = None
+        for part in text.split('"'):
+            if t == None:
+                t = part
+            else:
+                t += self.next_quote() + part
+        return t
+
+    def double_quotes_in_tt (self,text):
+        if not self.double_quote_replacment:
+            return text
+        return text.replace('"', self.double_quote_replacment)
+
+    def get_language(self):
+        if self._ISO639_TO_BABEL.has_key(self.language):
+            return self._ISO639_TO_BABEL[self.language]
+        else:
+            # support dialects.
+            l = self.language.split("_")[0]
+            if self._ISO639_TO_BABEL.has_key(l):
+                return self._ISO639_TO_BABEL[l]
+        return None
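
A small usage illustration (editorial note, not part of the committed file),
based only on the table and methods above:

    # Illustrative sketch only -- values follow from the code above.
    Babel('de_AT').get_language()            # -> 'ngerman' (dialect falls back to 'de')
    Babel('en').quote_quotes('say "hi"')     # -> "say ``hi''" (quote marks alternate)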
+
+
+latex_headings = {
+        'optionlist_environment' : [
+              '\\newcommand{\\optionlistlabel}[1]{\\bf #1 \\hfill}\n'
+              '\\newenvironment{optionlist}[1]\n'
+              '{\\begin{list}{}\n'
+              '  {\\setlength{\\labelwidth}{#1}\n'
+              '   \\setlength{\\rightmargin}{1cm}\n'
+              '   \\setlength{\\leftmargin}{\\rightmargin}\n'
+              '   \\addtolength{\\leftmargin}{\\labelwidth}\n'
+              '   \\addtolength{\\leftmargin}{\\labelsep}\n'
+              '   \\renewcommand{\\makelabel}{\\optionlistlabel}}\n'
+              '}{\\end{list}}\n',
+              ],
+        'footnote_floats' : [
+            '% begin: floats for footnotes tweaking.\n',
+            '\\setlength{\\floatsep}{0.5em}\n',
+            '\\setlength{\\textfloatsep}{\\fill}\n',
+            '\\addtolength{\\textfloatsep}{3em}\n',
+            '\\renewcommand{\\textfraction}{0.5}\n',
+            '\\renewcommand{\\topfraction}{0.5}\n',
+            '\\renewcommand{\\bottomfraction}{0.5}\n',
+            '\\setcounter{totalnumber}{50}\n',
+            '\\setcounter{topnumber}{50}\n',
+            '\\setcounter{bottomnumber}{50}\n',
+            '% end floats for footnotes\n',
+            ],
+         'some_commands' : [
+            '% some commands, that could be overwritten in the style file.\n'
+            '\\newcommand{\\rubric}[1]'
+            '{\\subsection*{~\\hfill {\\it #1} \\hfill ~}}\n'
+            '% end of "some commands"\n',
+         ]
+        }
+
+
+class LaTeXTranslator(nodes.NodeVisitor):
+    # When options are given to the documentclass, latex will pass them
+    # to other packages, as done with babel. 
+    # Dummy settings might be taken from document settings
+
+    d_options = '10pt'  # papersize, fontsize
+    d_paper = 'a4paper'
+    d_margins = '2cm'
+
+    latex_head = '\\documentclass[%s]{%s}\n'
+    encoding = '\\usepackage[latin1]{inputenc}\n'
+    linking = '\\usepackage[colorlinks=%s,linkcolor=%s,urlcolor=%s]{hyperref}\n'
+    geometry = '\\usepackage[%s,margin=%s,nohead]{geometry}\n'
+    stylesheet = '\\input{%s}\n'
+    # could add: generated on <day>, on <machine>, by <user>, using Docutils <version>.
+    generator = '%% generator Docutils: http://docutils.sourceforge.net/\n'
+
+    # use latex tableofcontents or let docutils do it.
+    use_latex_toc = 0
+    # table kind: if 0 tabularx (single page), 1 longtable
+    # maybe should be decided on row count.
+    use_longtable = 1
+    # TODO: use mixins for different implementations.
+    # list environment for option-list. else tabularx
+    use_optionlist_for_option_list = 1
+    # list environment for docinfo. else tabularx
+    use_optionlist_for_docinfo = 0 # NOT YET IN USE
+
+    # default link color
+    hyperlink_color = "blue"
+
+    def __init__(self, document):
+        nodes.NodeVisitor.__init__(self, document)
+        self.settings = settings = document.settings
+        self.use_latex_toc = settings.use_latex_toc
+        self.hyperlink_color = settings.hyperlink_color
+        if self.hyperlink_color == '0':
+            self.hyperlink_color = 'black'
+            self.colorlinks = 'false'
+        else:
+            self.colorlinks = 'true'
+            
+        # language: labels, bibliographic_fields, and author_separators.
+        # to allow writing labels for specific languages.
+        self.language = languages.get_language(settings.language_code)
+        self.babel = Babel(settings.language_code)
+        self.author_separator = self.language.author_separators[0]
+        if self.babel.get_language():
+            self.d_options += ',%s' % \
+                    self.babel.get_language()
+        self.head_prefix = [
+              self.latex_head % (self.d_options,self.settings.documentclass),
+              '\\usepackage{babel}\n',     # language is in documents settings.
+              '\\usepackage{shortvrb}\n',  # allows verb in footnotes.
+              self.encoding,
+              # * tabularx: for docinfo, automatic width of columns, always on one page.
+              '\\usepackage{tabularx}\n',
+              '\\usepackage{longtable}\n',
+              # possible other packages.
+              # * fancyhdr
+              # * ltxtable is a combination of tabularx and longtable (pagebreaks).
+              #   but ??
+              #
+              # extra space between text in tables and the line above them
+              '\\setlength{\\extrarowheight}{2pt}\n',
+              '\\usepackage{amsmath}\n',   # what is amsmath needed for?
+              '\\usepackage{graphicx}\n',
+              '\\usepackage{color}\n',
+              '\\usepackage{multirow}\n',
+              self.linking % (self.colorlinks, self.hyperlink_color, self.hyperlink_color),
+              # geometry and fonts might go into style.tex.
+              self.geometry % (self.d_paper, self.d_margins),
+              #
+              self.generator,
+              # latex lengths
+              '\\newlength{\\admonitionwidth}\n',
+              '\\setlength{\\admonitionwidth}{0.9\\textwidth}\n' 
+              # width for docinfo tablewidth
+              '\\newlength{\\docinfowidth}\n',
+              '\\setlength{\\docinfowidth}{0.9\\textwidth}\n' 
+              ]
+        self.head_prefix.extend( latex_headings['optionlist_environment'] )
+        self.head_prefix.extend( latex_headings['footnote_floats'] )
+        self.head_prefix.extend( latex_headings['some_commands'] )
+        ## stylesheet is last: so it might be possible to overwrite defaults.
+        stylesheet = self.get_stylesheet_reference()
+        if stylesheet:
+            self.head_prefix.append(self.stylesheet % (stylesheet))
+
+        if self.linking: # and maybe check for pdf
+            self.pdfinfo = [ ]
+            self.pdfauthor = None
+            # pdftitle, pdfsubject, pdfauthor, pdfkeywords, pdfcreator, pdfproducer
+        else:
+            self.pdfinfo = None
+        # NOTE: LaTeX wants a date and an author; reST puts these into
+        #   docinfo, so normally we do not want LaTeX author/date handling.
+        # The latex article class has its own handling of date and author: deactivate it.
+        self.latex_docinfo = 0
+        self.head = [ ]
+        if not self.latex_docinfo:
+            self.head.extend( [ '\\author{}\n', '\\date{}\n' ] )
+        self.body_prefix = ['\\raggedbottom\n']
+        # separate title, so we can append a subtitle.
+        self.title = ""
+        self.body = []
+        self.body_suffix = ['\n']
+        self.section_level = 0
+        self.context = []
+        self.topic_class = ''
+        # column specification for tables
+        self.colspecs = []
+        # Flags to encode
+        # ---------------
+        # verbatim: to tell encode not to encode.
+        self.verbatim = 0
+        # insert_none_breaking_blanks: to tell encode to replace blanks by "~".
+        self.insert_none_breaking_blanks = 0
+        # insert_newline: to tell encode to add latex newline.
+        self.insert_newline = 0
+        # mbox_newline: to tell encode to add mbox and newline.
+        self.mbox_newline = 0
+
+        # enumeration is done by list environment.
+        self._enum_cnt = 0
+        # docinfo. 
+        self.docinfo = None
+        # inside literal block: no quote mangling.
+        self.literal_block = 0
+        self.literal = 0
+
+    def get_stylesheet_reference(self):
+        if self.settings.stylesheet_path:
+            return self.settings.stylesheet_path
+        else:
+            return self.settings.stylesheet
+
+    def language_label(self, docutil_label):
+        return self.language.labels[docutil_label]
+
+    def encode(self, text):
+        """
+        Encode special characters in `text` & return.
+            # $ % & ~ _ ^ \ { }
+        Escaping with a backslash does not help with backslashes, ~ and ^.
+
+            < > are only available in math-mode (really ?)
+            $ starts math- mode.
+        AND quotes:
+        
+        """
+        if self.verbatim:
+            return text
+        # compile the regexps once. do it here so one can see them.
+        #
+        # first the braces.
+        if not self.__dict__.has_key('encode_re_braces'):
+            self.encode_re_braces = re.compile(r'([{}])')
+        text = self.encode_re_braces.sub(r'{\\\1}',text)
+        if not self.__dict__.has_key('encode_re_bslash'):
+            # find backslash: except in the form '{\{}' or '{\}}'.
+            self.encode_re_bslash = re.compile(r'(?<!{)(\\)(?![{}]})')
+        # then the backslash: except in the form from line above:
+        # either '{\{}' or '{\}}'.
+        text = self.encode_re_bslash.sub(r'{\\textbackslash}', text)
+
+        # then dollar
+        text = text.replace("$", '{\\$}')
+        # then all that needs math mode
+        text = text.replace("<", '{$<$}')
+        text = text.replace(">", '{$>$}')
+        # then
+        text = text.replace("&", '{\\&}')
+        text = text.replace("_", '{\\_}')
+        # the ^:
+        # * verb|^| does not work in mbox.
+        # * mathmode has wedge. hat{~} would also work.
+        text = text.replace("^", '{\\ensuremath{^\\wedge}}')
+        text = text.replace("%", '{\\%}')
+        text = text.replace("#", '{\\#}')
+        text = text.replace("~", '{\\~{ }}')
+        if self.literal_block or self.literal:
+            # pdflatex does not produce doublequotes for ngerman.
+            text = self.babel.double_quotes_in_tt(text)
+        else:
+            text = self.babel.quote_quotes(text)
+        if self.insert_newline:
+            # HACK: insert a blank before the newline, to avoid 
+            # ! LaTeX Error: There's no line here to end.
+            text = text.replace("\n", '~\\\\\n')
+        elif self.mbox_newline:
+            text = text.replace("\n", '}\\\\\n\\mbox{')
+        if self.insert_none_breaking_blanks:
+            text = text.replace(' ', '~')
+        # unicode !!! 
+        text = text.replace(u'\u2020', '{$\\dagger$}')
+        return text
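
A quick illustration of the escaping performed above (editorial note; the
`translator` name and the input string are made up for the example):

    # Illustrative sketch only -- not part of the committed file.
    # With all flags at their defaults and an English Babel instance:
    translator.encode('50% of $x_i')   # -> '50{\\%} of {\\$}x{\\_}i'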
+
+    def attval(self, text,
+               whitespace=re.compile('[\n\r\t\v\f]')):
+        """Cleanse, encode, and return attribute value text."""
+        return self.encode(whitespace.sub(' ', text))
+
+    def astext(self):
+        if self.pdfinfo:
+            if self.pdfauthor:
+                self.pdfinfo.append('pdfauthor={%s}' % self.pdfauthor)
+            pdfinfo = '\\hypersetup{\n' + ',\n'.join(self.pdfinfo) + '\n}\n'
+        else:
+            pdfinfo = ''
+        title = '\\title{%s}\n' % self.title
+        return ''.join(self.head_prefix + [title]  
+                        + self.head + [pdfinfo]
+                        + self.body_prefix  + self.body + self.body_suffix)
+
+    def visit_Text(self, node):
+        self.body.append(self.encode(node.astext()))
+
+    def depart_Text(self, node):
+        pass
+
+    def visit_address(self, node):
+        self.visit_docinfo_item(node, 'address')
+
+    def depart_address(self, node):
+        self.depart_docinfo_item(node)
+
+    def visit_admonition(self, node, name):
+        self.body.append('\\begin{center}\\begin{sffamily}\n')
+        self.body.append('\\fbox{\\parbox{\\admonitionwidth}{\n')
+        self.body.append('\\textbf{\\large ' + self.language.labels[name] + '}\n')
+        self.body.append('\\vspace{2mm}\n')
+
+
+    def depart_admonition(self):
+        self.body.append('}}\n') # end parbox fbox
+        self.body.append('\\end{sffamily}\n\\end{center}\n');
+
+    def visit_attention(self, node):
+        self.visit_admonition(node, 'attention')
+
+    def depart_attention(self, node):
+        self.depart_admonition()
+
+    def visit_author(self, node):
+        self.visit_docinfo_item(node, 'author')
+
+    def depart_author(self, node):
+        self.depart_docinfo_item(node)
+
+    def visit_authors(self, node):
+        # ignore. visit_author is called for each one
+        # self.visit_docinfo_item(node, 'author')
+        pass
+
+    def depart_authors(self, node):
+        # self.depart_docinfo_item(node)
+        pass
+
+    def visit_block_quote(self, node):
+        self.body.append( '\\begin{quote}\n')
+
+    def depart_block_quote(self, node):
+        self.body.append( '\\end{quote}\n')
+
+    def visit_bullet_list(self, node):
+        if not self.use_latex_toc and self.topic_class == 'contents':
+            self.body.append( '\\begin{list}{}{}\n' )
+        else:
+            self.body.append( '\\begin{itemize}\n' )
+
+    def depart_bullet_list(self, node):
+        if not self.use_latex_toc and self.topic_class == 'contents':
+            self.body.append( '\\end{list}\n' )
+        else:
+            self.body.append( '\\end{itemize}\n' )
+
+    def visit_caption(self, node):
+        self.body.append( '\\caption{' )
+
+    def depart_caption(self, node):
+        self.body.append('}')
+
+    def visit_caution(self, node):
+        self.visit_admonition(node, 'caution')
+
+    def depart_caution(self, node):
+        self.depart_admonition()
+
+    def visit_citation(self, node):
+        self.visit_footnote(node)
+
+    def depart_citation(self, node):
+        self.depart_footnote(node)
+
+    def visit_title_reference(self, node):
+        # BUG title-references are what?
+        pass
+
+    def depart_title_reference(self, node):
+        pass
+
+    def visit_citation_reference(self, node):
+        href = ''
+        if node.has_key('refid'):
+            href = node['refid']
+        elif node.has_key('refname'):
+            href = self.document.nameids[node['refname']]
+        self.body.append('[\\hyperlink{%s}{' % href)
+
+    def depart_citation_reference(self, node):
+        self.body.append('}]')
+
+    def visit_classifier(self, node):
+        self.body.append( '(\\textbf{' )
+
+    def depart_classifier(self, node):
+        self.body.append( '})\n' )
+
+    def visit_colspec(self, node):
+        if self.use_longtable:
+            self.colspecs.append(node)
+        else:    
+            self.context[-1] += 1
+
+    def depart_colspec(self, node):
+        pass
+
+    def visit_comment(self, node,
+                      sub=re.compile('\n').sub):
+        """Escape end of line by a ne comment start in comment text."""
+        self.body.append('%% %s \n' % sub('\n% ', node.astext()))
+        raise nodes.SkipNode
+
+    def visit_contact(self, node):
+        self.visit_docinfo_item(node, 'contact')
+
+    def depart_contact(self, node):
+        self.depart_docinfo_item(node)
+
+    def visit_copyright(self, node):
+        self.visit_docinfo_item(node, 'copyright')
+
+    def depart_copyright(self, node):
+        self.depart_docinfo_item(node)
+
+    def visit_danger(self, node):
+        self.visit_admonition(node, 'danger')
+
+    def depart_danger(self, node):
+        self.depart_admonition()
+
+    def visit_date(self, node):
+        self.visit_docinfo_item(node, 'date')
+
+    def depart_date(self, node):
+        self.depart_docinfo_item(node)
+
+    def visit_decoration(self, node):
+        pass
+
+    def depart_decoration(self, node):
+        pass
+
+    def visit_definition(self, node):
+        self.body.append('%[visit_definition]\n')
+
+    def depart_definition(self, node):
+        self.body.append('\n')
+        self.body.append('%[depart_definition]\n')
+
+    def visit_definition_list(self, node):
+        self.body.append( '\\begin{description}\n' )
+
+    def depart_definition_list(self, node):
+        self.body.append( '\\end{description}\n' )
+
+    def visit_definition_list_item(self, node):
+        self.body.append('%[visit_definition_list_item]\n')
+
+    def depart_definition_list_item(self, node):
+        self.body.append('%[depart_definition_list_item]\n')
+
+    def visit_description(self, node):
+        if self.use_optionlist_for_option_list:
+            self.body.append( ' ' )
+        else:    
+            self.body.append( ' & ' )
+
+    def depart_description(self, node):
+        pass
+
+    def visit_docinfo(self, node):
+        self.docinfo = []
+        self.docinfo.append('%' + '_'*75 + '\n')
+        self.docinfo.append('\\begin{center}\n')
+        self.docinfo.append('\\begin{tabularx}{\\docinfowidth}{lX}\n')
+
+    def depart_docinfo(self, node):
+        self.docinfo.append('\\end{tabularx}\n')
+        self.docinfo.append('\\end{center}\n')
+        self.body = self.docinfo + self.body
+        # clear docinfo, so field names are no longer appended.
+        self.docinfo = None
+        if self.use_latex_toc:
+            self.body.append('\\tableofcontents\n\n\\bigskip\n')
+
+    def visit_docinfo_item(self, node, name):
+        if not self.latex_docinfo:
+            self.docinfo.append('\\textbf{%s}: &\n\t' % self.language_label(name))
+        if name == 'author':
+            if not self.pdfinfo == None:
+                if not self.pdfauthor:
+                    self.pdfauthor = self.attval(node.astext())
+                else:
+                    self.pdfauthor += self.author_separator + self.attval(node.astext())
+            if self.latex_docinfo:
+                self.head.append('\\author{%s}\n' % self.attval(node.astext()))
+                raise nodes.SkipNode
+        elif name == 'date':
+            if self.latex_docinfo:
+                self.head.append('\\date{%s}\n' % self.attval(node.astext()))
+                raise nodes.SkipNode
+        if name == 'address':
+            # BUG will fail if latex_docinfo is set.
+            self.insert_newline = 1 
+            self.docinfo.append('{\\raggedright\n')
+            self.context.append(' } \\\\\n')
+        else:    
+            self.context.append(' \\\\\n')
+        self.context.append(self.docinfo)
+        self.context.append(len(self.body))
+
+    def depart_docinfo_item(self, node):
+        size = self.context.pop()
+        dest = self.context.pop()
+        tail = self.context.pop()
+        tail = self.body[size:] + [tail]
+        del self.body[size:]
+        dest.extend(tail)
+        # for address we did set insert_newline
+        self.insert_newline = 0
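
The visit/depart pair above relies on a small splice: the insertion point is
remembered on visit, and the rendered item is later moved into the docinfo
table on depart. A stand-alone restatement of that step (editorial sketch
with made-up values, not part of the committed file):

    # Illustrative sketch only -- not part of the committed file.
    body = ['<before>', 'Author', ' text']   # hypothetical translator output
    mark = 1                                 # len(self.body) saved on visit
    tail = ' \\\\\n'                         # row terminator saved on visit
    docinfo = []                             # destination saved on visit
    docinfo.extend(body[mark:] + [tail])     # move the rendered item ...
    del body[mark:]                          # ... out of the main body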
+
+    def visit_doctest_block(self, node):
+        self.body.append( '\\begin{verbatim}' )
+        self.verbatim = 1
+
+    def depart_doctest_block(self, node):
+        self.body.append( '\\end{verbatim}\n' )
+        self.verbatim = 0
+
+    def visit_document(self, node):
+        self.body_prefix.append('\\begin{document}\n')
+        self.body_prefix.append('\\maketitle\n\n')
+        # alternative use titlepage environment.
+        # \begin{titlepage}
+
+    def depart_document(self, node):
+        self.body_suffix.append('\\end{document}\n')
+
+    def visit_emphasis(self, node):
+        self.body.append('\\emph{')
+
+    def depart_emphasis(self, node):
+        self.body.append('}')
+
+    def visit_entry(self, node):
+        # cell separation
+        column_one = 1
+        if self.context[-1] > 0:
+            column_one = 0
+        if not column_one:
+            self.body.append(' & ')
+
+        # multi{row,column}
+        if node.has_key('morerows') and node.has_key('morecols'):
+            raise NotImplementedError('LaTeX can\'t handle cells that '
+            'span multiple rows *and* columns, sorry.')
+        atts = {}
+        if node.has_key('morerows'):
+            count = node['morerows'] + 1
+            self.body.append('\\multirow{%d}*{' % count)
+            self.context.append('}')
+        elif node.has_key('morecols'):
+            # the vertical bar before the column is omitted if it is the
+            # first column; the one after is always present.
+            if column_one:
+                bar = '|'
+            else:
+                bar = ''
+            count = node['morecols'] + 1
+            self.body.append('\\multicolumn{%d}{%sl|}{' % (count, bar))
+            self.context.append('}')
+        else:
+            self.context.append('')
+
+        # header / not header
+        if isinstance(node.parent.parent, nodes.thead):
+            self.body.append('\\textbf{')
+            self.context.append('}')
+        else:
+            self.context.append('')
+
+    def depart_entry(self, node):
+        self.body.append(self.context.pop()) # header / not header
+        self.body.append(self.context.pop()) # multirow/column
+        self.context[-1] += 1
+
+    def visit_enumerated_list(self, node):
+        # We create our own enumeration list environment.
+        # This allows setting the style and the starting value,
+        # and supports unlimited nesting.
+        self._enum_cnt += 1
+
+        enum_style = {'arabic':'arabic',
+                'loweralpha':'alph',
+                'upperalpha':'Alph', 
+                'lowerroman':'roman',
+                'upperroman':'Roman' }
+        enum_suffix = ""
+        if node.has_key('suffix'):
+            enum_suffix = node['suffix']
+        enum_prefix = ""
+        if node.has_key('prefix'):
+            enum_prefix = node['prefix']
+        
+        enum_type = "arabic"
+        if node.has_key('enumtype'):
+            enum_type = node['enumtype']
+        if enum_style.has_key(enum_type):
+            enum_type = enum_style[enum_type]
+        counter_name = "listcnt%d" % self._enum_cnt;
+        self.body.append('\\newcounter{%s}\n' % counter_name)
+        self.body.append('\\begin{list}{%s\\%s{%s}%s}\n' % \
+            (enum_prefix,enum_type,counter_name,enum_suffix))
+        self.body.append('{\n')
+        self.body.append('\\usecounter{%s}\n' % counter_name)
+        # set start after usecounter, because it initializes to zero.
+        if node.has_key('start'):
+            self.body.append('\\addtocounter{%s}{%d}\n' \
+                    % (counter_name,node['start']-1))
+        ## set rightmargin equal to leftmargin
+        self.body.append('\\setlength{\\rightmargin}{\\leftmargin}\n')
+        self.body.append('}\n')
+
+    def depart_enumerated_list(self, node):
+        self.body.append('\\end{list}\n')
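
For reference, a sketch (editorial note, not part of the committed file) of
the environment the two methods above emit for an arabic list with suffix "."
that starts at 3:

    # Illustrative only: the text appended to self.body looks roughly like
    #
    #   \newcounter{listcnt1}
    #   \begin{list}{\arabic{listcnt1}.}
    #   {
    #   \usecounter{listcnt1}
    #   \addtocounter{listcnt1}{2}
    #   \setlength{\rightmargin}{\leftmargin}
    #   }
    #   \item ...        (labels render as 3., 4., ...)
    #   \end{list}
    #
    # The counter name depends on how many enumerated lists came before.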
+
+    def visit_error(self, node):
+        self.visit_admonition(node, 'error')
+
+    def depart_error(self, node):
+        self.depart_admonition()
+
+    def visit_field(self, node):
+        # real output is done in siblings: _argument, _body, _name
+        pass
+
+    def depart_field(self, node):
+        self.body.append('\n')
+        ##self.body.append('%[depart_field]\n')
+
+    def visit_field_argument(self, node):
+        self.body.append('%[visit_field_argument]\n')
+
+    def depart_field_argument(self, node):
+        self.body.append('%[depart_field_argument]\n')
+
+    def visit_field_body(self, node):
+        # BUG: by attaching as text we lose references.
+        if self.docinfo:
+            self.docinfo.append('%s \\\\\n' % node.astext())
+            raise nodes.SkipNode
+        # BUG: what happens if not docinfo
+
+    def depart_field_body(self, node):
+        self.body.append( '\n' )
+
+    def visit_field_list(self, node):
+        if not self.docinfo:
+            self.body.append('\\begin{quote}\n')
+            self.body.append('\\begin{description}\n')
+
+    def depart_field_list(self, node):
+        if not self.docinfo:
+            self.body.append('\\end{description}\n')
+            self.body.append('\\end{quote}\n')
+
+    def visit_field_name(self, node):
+        # BUG this duplicates docinfo_item
+        if self.docinfo:
+            self.docinfo.append('\\textbf{%s}: &\n\t' % node.astext())
+            raise nodes.SkipNode
+        else:
+            self.body.append('\\item [')
+
+    def depart_field_name(self, node):
+        if not self.docinfo:
+            self.body.append(':]')
+
+    def visit_figure(self, node):
+        self.body.append( '\\begin{figure}\n' )
+
+    def depart_figure(self, node):
+        self.body.append( '\\end{figure}\n' )
+
+    def visit_footer(self, node):
+        self.context.append(len(self.body))
+
+    def depart_footer(self, node):
+        start = self.context.pop()
+        footer = (['\n\\begin{center}\small\n']
+                  + self.body[start:] + ['\n\\end{center}\n'])
+        self.body_suffix[:0] = footer
+        del self.body[start:]
+
+    def visit_footnote(self, node):
+        notename = node['id']
+        self.body.append('\\begin{figure}[b]')
+        self.body.append('\\hypertarget{%s}' % notename)
+
+    def depart_footnote(self, node):
+        self.body.append('\\end{figure}\n')
+
+    def visit_footnote_reference(self, node):
+        href = ''
+        if node.has_key('refid'):
+            href = node['refid']
+        elif node.has_key('refname'):
+            href = self.document.nameids[node['refname']]
+        format = self.settings.footnote_references
+        if format == 'brackets':
+            suffix = '['
+            self.context.append(']')
+        elif format == 'superscript':
+            suffix = '\\raisebox{.5em}[0em]{\\scriptsize'
+            self.context.append('}')
+        else:                           # shouldn't happen
+            raise AssertionError('Illegal footnote reference format.')
+        self.body.append('%s\\hyperlink{%s}{' % (suffix,href))
+
+    def depart_footnote_reference(self, node):
+        self.body.append('}%s' % self.context.pop())
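
Concretely (editorial illustration, not part of the committed file), a
footnote reference labelled "1" whose target id is "id5" comes out as:

    # Illustrative sketch only -- derived from the two methods above.
    #   --footnote-references=brackets:
    #       [\hyperlink{id5}{1}]
    #   --footnote-references=superscript:
    #       \raisebox{.5em}[0em]{\scriptsize\hyperlink{id5}{1}}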
+
+    def visit_generated(self, node):
+        pass
+
+    def depart_generated(self, node):
+        pass
+
+    def visit_header(self, node):
+        self.context.append(len(self.body))
+
+    def depart_header(self, node):
+        start = self.context.pop()
+        self.body_prefix.append('\n\\verb|begin_header|\n')
+        self.body_prefix.extend(self.body[start:])
+        self.body_prefix.append('\n\\verb|end_header|\n')
+        del self.body[start:]
+
+    def visit_hint(self, node):
+        self.visit_admonition(node, 'hint')
+
+    def depart_hint(self, node):
+        self.depart_admonition()
+
+    def visit_image(self, node):
+        atts = node.attributes.copy()
+        href = atts['uri']
+        ##self.body.append('\\begin{center}\n')
+        self.body.append('\n\\includegraphics{%s}\n' % href)
+        ##self.body.append('\\end{center}\n')
+
+    def depart_image(self, node):
+        pass
+
+    def visit_important(self, node):
+        self.visit_admonition(node, 'important')
+
+    def depart_important(self, node):
+        self.depart_admonition()
+
+    def visit_interpreted(self, node):
+        # @@@ Incomplete, pending a proper implementation on the
+        # Parser/Reader end.
+        self.visit_literal(node)
+
+    def depart_interpreted(self, node):
+        self.depart_literal(node)
+
+    def visit_label(self, node):
+        # footnote/citation label
+        self.body.append('[')
+
+    def depart_label(self, node):
+        self.body.append(']')
+
+    def visit_legend(self, node):
+        self.body.append('{\\small ')
+
+    def depart_legend(self, node):
+        self.body.append('}')
+
+    def visit_line_block(self, node):
+        """line-block: 
+        * whitespace (including linebreaks) is significant 
+        * inline markup is supported. 
+        * serif typeface
+        """
+        self.body.append('\\begin{flushleft}\n')
+        self.insert_none_breaking_blanks = 1
+        self.line_block_without_mbox = 1
+        if self.line_block_without_mbox:
+            self.insert_newline = 1
+        else:
+            self.mbox_newline = 1
+            self.body.append('\\mbox{')
+
+    def depart_line_block(self, node):
+        if self.line_block_without_mbox:
+            self.insert_newline = 0
+        else:
+            self.body.append('}')
+            self.mbox_newline = 0
+        self.insert_none_breaking_blanks = 0
+        self.body.append('\n\\end{flushleft}\n')
+
+    def visit_list_item(self, node):
+        self.body.append('\\item ')
+
+    def depart_list_item(self, node):
+        self.body.append('\n')
+
+    def visit_literal(self, node):
+        self.literal = 1
+        self.body.append('\\texttt{')
+
+    def depart_literal(self, node):
+        self.body.append('}')
+        self.literal = 0
+
+    def visit_literal_block(self, node):
+        """
+        .. parsed-literal::
+        """
+        # typically in a typewriter/monospaced typeface.
+        # care must be taken with the text, because inline markup is recognized.
+        # 
+        # possibilities:
+        # * verbatim: is no possibility, as inline markup does not work.
+        # * obey..: is from julien and never worked for me (grubert).
+        self.use_for_literal_block = "mbox"
+        self.literal_block = 1
+        if (self.use_for_literal_block == "mbox"):
+            self.mbox_newline = 1
+            self.insert_none_breaking_blanks = 1
+            self.body.append('\\begin{ttfamily}\\begin{flushleft}\n\\mbox{')
+        else:
+            self.body.append('{\\obeylines\\obeyspaces\\ttfamily\n')
+
+    def depart_literal_block(self, node):
+        if (self.use_for_literal_block == "mbox"):
+            self.body.append('}\n\\end{flushleft}\\end{ttfamily}\n')
+            self.insert_none_breaking_blanks = 0
+            self.mbox_newline = 0
+        else:
+            self.body.append('}\n')
+        self.literal_block = 0
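
With the "mbox" strategy above, a two-line literal block such as "a b\nc" is
written out roughly as follows (editorial illustration, not part of the
committed file):

    # Illustrative sketch only -- output of visit/encode/depart combined.
    #   \begin{ttfamily}\begin{flushleft}
    #   \mbox{a~b}\\
    #   \mbox{c}
    #   \end{flushleft}\end{ttfamily}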
+
+    def visit_meta(self, node):
+        self.body.append('[visit_meta]\n')
+        # BUG maybe set keywords for pdf
+        ##self.head.append(self.starttag(node, 'meta', **node.attributes))
+
+    def depart_meta(self, node):
+        self.body.append('[depart_meta]\n')
+
+    def visit_note(self, node):
+        self.visit_admonition(node, 'note')
+
+    def depart_note(self, node):
+        self.depart_admonition()
+
+    def visit_option(self, node):
+        if self.context[-1]:
+            # this is not the first option
+            self.body.append(', ')
+
+    def depart_option(self, node):
+        # flag that the first option is done.
+        self.context[-1] += 1
+
+    def visit_option_argument(self, node):
+        """The delimiter betweeen an option and its argument."""
+        self.body.append(node.get('delimiter', ' '))
+
+    def depart_option_argument(self, node):
+        pass
+
+    def visit_option_group(self, node):
+        if self.use_optionlist_for_option_list:
+            self.body.append('\\item [')
+        else:
+            atts = {}
+            if len(node.astext()) > 14:
+                self.body.append('\\multicolumn{2}{l}{')
+                self.context.append('} \\\\\n  ')
+            else:
+                self.context.append('')
+            self.body.append('\\texttt{')
+        # flag for first option    
+        self.context.append(0)
+
+    def depart_option_group(self, node):
+        self.context.pop() # the flag
+        if self.use_optionlist_for_option_list:
+            self.body.append('] ')
+        else:
+            self.body.append('}')
+            self.body.append(self.context.pop())
+
+    def visit_option_list(self, node):
+        self.body.append('% [option list]\n')
+        if self.use_optionlist_for_option_list:
+            self.body.append('\\begin{optionlist}{3cm}\n')
+        else:
+            self.body.append('\\begin{center}\n')
+            # BUG: use admwidth or make it relative to textwidth ?
+            self.body.append('\\begin{tabularx}{.9\\linewidth}{lX}\n')
+
+    def depart_option_list(self, node):
+        if self.use_optionlist_for_option_list:
+            self.body.append('\\end{optionlist}\n')
+        else:
+            self.body.append('\\end{tabularx}\n')
+            self.body.append('\\end{center}\n')
+
+    def visit_option_list_item(self, node):
+        pass
+
+    def depart_option_list_item(self, node):
+        if not self.use_optionlist_for_option_list:
+            self.body.append('\\\\\n')
+
+    def visit_option_string(self, node):
+        ##self.body.append(self.starttag(node, 'span', '', CLASS='option'))
+        pass
+
+    def depart_option_string(self, node):
+        ##self.body.append('</span>')
+        pass
+
+    def visit_organization(self, node):
+        self.visit_docinfo_item(node, 'organization')
+
+    def depart_organization(self, node):
+        self.depart_docinfo_item(node)
+
+    def visit_paragraph(self, node):
+        if not self.topic_class == 'contents':
+            self.body.append('\n')
+
+    def depart_paragraph(self, node):
+        self.body.append('\n')
+
+    def visit_problematic(self, node):
+        self.body.append('{\\color{red}\\bfseries{}')
+
+    def depart_problematic(self, node):
+        self.body.append('}')
+
+    def visit_raw(self, node):
+        if node.has_key('format') and node['format'].lower() == 'latex':
+            self.body.append(node.astext())
+        raise nodes.SkipNode
+
+    def visit_reference(self, node):
+        # for pdflatex hyperrefs might be supported
+        if node.has_key('refuri'):
+            href = node['refuri']
+        elif node.has_key('refid'):
+            href = '#' + node['refid']
+        elif node.has_key('refname'):
+            href = '#' + self.document.nameids[node['refname']]
+        ##self.body.append('[visit_reference]')
+        self.body.append('\\href{%s}{' % href)
+
+    def depart_reference(self, node):
+        self.body.append('}')
+        ##self.body.append('[depart_reference]')
+
+    def visit_revision(self, node):
+        self.visit_docinfo_item(node, 'revision')
+
+    def depart_revision(self, node):
+        self.depart_docinfo_item(node)
+
+    def visit_row(self, node):
+        self.context.append(0)
+
+    def depart_row(self, node):
+        self.context.pop()  # remove cell counter
+        self.body.append(' \\\\ \\hline\n')
+
+    def visit_section(self, node):
+        self.section_level += 1
+
+    def depart_section(self, node):
+        self.section_level -= 1
+
+    def visit_sidebar(self, node):
+        # BUG:  this is just a hack to make sidebars render something 
+        self.body.append('\\begin{center}\\begin{sffamily}\n')
+        self.body.append('\\fbox{\\colorbox[gray]{0.80}{\\parbox{\\admonitionwidth}{\n')
+
+    def depart_sidebar(self, node):
+        self.body.append('}}}\n') # end parbox colorbox fbox
+        self.body.append('\\end{sffamily}\n\\end{center}\n');
+
+
+    attribution_formats = {'dash': ('---', ''),
+                           'parentheses': ('(', ')'),
+                           'parens': ('(', ')'),
+                           'none': ('', '')}
+
+    def visit_attribution(self, node):
+        prefix, suffix = self.attribution_formats[self.settings.attribution]
+        self.body.append('\n\\begin{flushright}\n')
+        self.body.append(prefix)
+        self.context.append(suffix)
+
+    def depart_attribution(self, node):
+        self.body.append(self.context.pop() + '\n')
+        self.body.append('\\end{flushright}\n')
+
+    def visit_status(self, node):
+        self.visit_docinfo_item(node, 'status')
+
+    def depart_status(self, node):
+        self.depart_docinfo_item(node)
+
+    def visit_strong(self, node):
+        self.body.append('\\textbf{')
+
+    def depart_strong(self, node):
+        self.body.append('}')
+
+    def visit_substitution_definition(self, node):
+        raise nodes.SkipNode
+
+    def visit_substitution_reference(self, node):
+        self.unimplemented_visit(node)
+
+    def visit_subtitle(self, node):
+        if isinstance(node.parent, nodes.sidebar):
+            self.body.append('~\\\\\n\\textbf{')
+            self.context.append('}\n\\smallskip\n')
+        else:
+            self.title = self.title + \
+                '\\\\\n\\large{%s}\n' % self.encode(node.astext()) 
+            raise nodes.SkipNode
+
+    def depart_subtitle(self, node):
+        if isinstance(node.parent, nodes.sidebar):
+            self.body.append(self.context.pop())
+
+    def visit_system_message(self, node):
+        if node['level'] < self.document.reporter['writer'].report_level:
+            raise nodes.SkipNode
+
+
+    def depart_system_message(self, node):
+        self.body.append('\n')
+
+    def get_colspecs(self):
+        """
+        Return column specification for longtable.
+
+        Assumes a reST line length of 80 characters.
+        """
+        width = 80
+        
+        total_width = 0.0
+        # first see if we get too wide.
+        for node in self.colspecs:
+            colwidth = float(node['colwidth']) / width 
+            total_width += colwidth
+        # do not make it full linewidth
+        factor = 0.93
+        if total_width > 1.0:
+            factor /= total_width
+            
+        latex_table_spec = ""
+        for node in self.colspecs:
+            colwidth = factor * float(node['colwidth']) / width 
+            latex_table_spec += "|p{%.2f\\linewidth}" % colwidth
+        self.colspecs = []
+        return latex_table_spec+"|"
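
A worked example of the computation above (editorial note, not part of the
committed file), for two columns declared as 20 and 60 characters wide:

    # Illustrative sketch only.
    # total_width = (20 + 60) / 80 = 1.0, so factor stays at 0.93;
    # column widths become 0.93*20/80 ~ 0.23 and 0.93*60/80 ~ 0.70, giving
    #   |p{0.23\linewidth}|p{0.70\linewidth}|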
+
+    def visit_table(self, node):
+        if self.use_longtable:
+            self.body.append('\n\\begin{longtable}[c]')
+        else:
+            self.body.append('\n\\begin{tabularx}{\\linewidth}')
+            self.context.append('table_sentinel') # sentinel
+            self.context.append(0) # column counter
+
+    def depart_table(self, node):
+        if self.use_longtable:
+            self.body.append('\\end{longtable}\n')
+        else:    
+            self.body.append('\\end{tabularx}\n')
+            sentinel = self.context.pop()
+            if sentinel != 'table_sentinel':
+                print 'context:', self.context + [sentinel]
+                raise AssertionError
+
+    def table_preamble(self):
+        if self.use_longtable:
+            self.body.append('{%s}\n' % self.get_colspecs())
+        else:
+            if self.context[-1] != 'table_sentinel':
+                self.body.append('{%s}' % ('|X' * self.context.pop() + '|'))
+                self.body.append('\n\\hline')
+
+    def visit_target(self, node):
+        if not (node.has_key('refuri') or node.has_key('refid')
+                or node.has_key('refname')):
+            self.body.append('\\hypertarget{%s}{' % node['name'])
+            self.context.append('}')
+        else:
+            self.context.append('')
+
+    def depart_target(self, node):
+        self.body.append(self.context.pop())
+
+    def visit_tbody(self, node):
+        # BUG write preamble if not yet done (colspecs not [])
+        # for tables without heads.
+        if self.colspecs:
+            self.visit_thead(None)
+            self.depart_thead(None)
+        self.body.append('%[visit_tbody]\n')
+
+    def depart_tbody(self, node):
+        self.body.append('%[depart_tbody]\n')
+
+    def visit_term(self, node):
+        self.body.append('\\item[')
+
+    def depart_term(self, node):
+        # definition list term.
+        self.body.append(':]\n')
+
+    def visit_tgroup(self, node):
+        #self.body.append(self.starttag(node, 'colgroup'))
+        #self.context.append('</colgroup>\n')
+        pass
+
+    def depart_tgroup(self, node):
+        pass
+
+    def visit_thead(self, node):
+        # number_of_columns will be zero after get_colspecs.
+        # BUG ! push onto context for depart to pop it.
+        number_of_columns = len(self.colspecs)
+        self.table_preamble()
+        #BUG longtable needs firstpage and lastfooter too.
+        self.body.append('\\hline\n')
+
+    def depart_thead(self, node):
+        if self.use_longtable:
+            # the table header written should be on every page
+            # => \endhead
+            self.body.append('\\endhead\n')
+            # and the firsthead => \endfirsthead
+            # BUG: I want a "continued from previous page" note on every
+            # page except the first, but then we need the header twice.
+            #
+            # there is a \endfoot and \endlastfoot too,
+            # but we need the number of columns for:
+            # self.body.append('\\multicolumn{%d}{c}{"..."}\n' % number_of_columns)
+            # self.body.append('\\hline\n\\endfoot\n')
+            # self.body.append('\\hline\n')
+            # self.body.append('\\endlastfoot\n')
+            
+
+    def visit_tip(self, node):
+        self.visit_admonition(node, 'tip')
+
+    def depart_tip(self, node):
+        self.depart_admonition()
+
+    def visit_title(self, node):
+        """Only 3 section levels are supported by LaTeX article (AFAIR)."""
+        if isinstance(node.parent, nodes.topic):
+            # section titles before the table of contents.
+            if node.parent.hasattr('id'):
+                self.body.append('\\hypertarget{%s}{}' % node.parent['id'])
+            # BUG: latex chokes on center environment with "perhaps a missing item".
+            # so we use hfill.
+            self.body.append('\\subsection*{~\\hfill ')
+            # the closing brace for subsection.
+            self.context.append('\\hfill ~}\n')
+        elif isinstance(node.parent, nodes.sidebar):
+            self.body.append('\\textbf{\\large ')
+            self.context.append('}\n\\smallskip\n')
+        elif self.section_level == 0:
+            # document title
+            self.title = self.encode(node.astext())
+            if not self.pdfinfo == None:
+                self.pdfinfo.append( 'pdftitle={%s}' % self.encode(node.astext()) )
+            raise nodes.SkipNode
+        else:
+            self.body.append('\n\n')
+            self.body.append('%' + '_' * 75)
+            self.body.append('\n\n')
+            if node.parent.hasattr('id'):
+                self.body.append('\\hypertarget{%s}{}\n' % node.parent['id'])
+            # section_level 0 is title and handled above.    
+            # BUG: latex has no deeper sections (actually paragraph is not a section either).
+            if self.use_latex_toc:
+                section_star = ""
+            else:
+                section_star = "*"
+            if (self.section_level<=3):  # 1,2,3    
+                self.body.append('\\%ssection%s{' % ('sub'*(self.section_level-1), section_star))
+            elif (self.section_level==4):      
+                #self.body.append('\\paragraph*{')
+                self.body.append('\\subsubsection%s{' % (section_star))
+            else:
+                #self.body.append('\\subparagraph*{')
+                self.body.append('\\subsubsection%s{' % (section_star))
+            # BUG: self.body.append( '\\label{%s}\n' % name)
+            self.context.append('}\n')
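
In short (editorial note, not part of the committed file), section nesting
maps onto LaTeX sectioning commands as follows (starred variants unless
--use-latex-toc is given):

    # Illustrative summary of the branches above.
    #   level 1  -> \section*{
    #   level 2  -> \subsection*{
    #   level 3  -> \subsubsection*{
    #   level 4+ -> \subsubsection*{   (no deeper sectioning is attempted)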
+
+    def depart_title(self, node):
+        self.body.append(self.context.pop())
+        if isinstance(node.parent, nodes.sidebar):
+            return
+        # BUG level depends on style.
+        elif node.parent.hasattr('id') and not self.use_latex_toc:
+            # pdflatex allows level 0 to 3
+            # ToC would be the only one on level 0, so I choose to decrement the rest.
+            # "Table of contents" bookmark to see the ToC. To avoid this
+            # we set all zeroes to one.
+            l = self.section_level
+            if l>0:
+                l = l-1
+            self.body.append('\\pdfbookmark[%d]{%s}{%s}\n' % \
+                (l,node.astext(),node.parent['id']))
+
+    def visit_topic(self, node):
+        self.topic_class = node.get('class')
+        if self.use_latex_toc:
+            self.topic_class = ''
+            raise nodes.SkipNode
+
+    def depart_topic(self, node):
+        self.topic_class = ''
+        self.body.append('\n')
+
+    def visit_rubric(self, node):
+#        self.body.append('\\hfill {\\color{red}\\bfseries{}')
+#        self.context.append('} \\hfill ~\n')
+        self.body.append('\\rubric{')
+        self.context.append('}\n')
+
+    def depart_rubric(self, node):
+        self.body.append(self.context.pop())
+
+    def visit_transition(self, node):
+        self.body.append('\n\n')
+        self.body.append('%' + '_' * 75)
+        self.body.append('\n\\hspace*{\\fill}\\hrulefill\\hspace*{\\fill}')
+        self.body.append('\n\n')
+
+    def depart_transition(self, node):
+        #self.body.append('[depart_transition]')
+        pass
+
+    def visit_version(self, node):
+        self.visit_docinfo_item(node, 'version')
+
+    def depart_version(self, node):
+        self.depart_docinfo_item(node)
+
+    def visit_warning(self, node):
+        self.visit_admonition(node, 'warning')
+
+    def depart_warning(self, node):
+        self.depart_admonition()
+
+    def unimplemented_visit(self, node):
+        raise NotImplementedError('visiting unimplemented node type: %s'
+                                  % node.__class__.__name__)
+
+#    def unknown_visit(self, node):
+#    def default_visit(self, node):
+    
+# vim: set ts=4 et ai :

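A note on the section-title handling in the LaTeX writer above: visit_title()
maps docutils section levels onto LaTeX sectioning commands and collapses
everything deeper than level 3 into \subsubsection. A minimal standalone
sketch of that mapping (the helper name latex_section_command is hypothetical
and not part of the diff):

    def latex_section_command(level, use_latex_toc=False):
        """Return the LaTeX sectioning command for a docutils section level."""
        # Starred (unnumbered) commands are used unless LaTeX builds the ToC.
        if use_latex_toc:
            star = ""
        else:
            star = "*"
        if level <= 3:
            # level 1 -> \section, 2 -> \subsection, 3 -> \subsubsection
            return "\\%ssection%s" % ("sub" * (level - 1), star)
        # LaTeX article offers nothing deeper, so levels 4 and up collapse here.
        return "\\subsubsection%s" % star

    # latex_section_command(2) == '\\subsection*'
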
Added: trunk/www/utils/helpers/docutils/docutils/writers/pep_html.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/writers/pep_html.py      2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/writers/pep_html.py      2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,113 @@
+# Author: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.23 $
+# Date: $Date: 2003/06/16 03:26:53 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+PEP HTML Writer.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import docutils
+from docutils import nodes, frontend, utils
+from docutils.writers import html4css1
+
+
+class Writer(html4css1.Writer):
+
+    settings_spec = html4css1.Writer.settings_spec + (
+        'PEP/HTML-Specific Options',
+        'The HTML --footnote-references option is set to "brackets" by '
+        'default.',
+        (('Specify a PEP stylesheet URL, used verbatim.  Default is '
+          '--stylesheet\'s value.  If given, --pep-stylesheet overrides '
+          '--stylesheet.',
+          ['--pep-stylesheet'],
+          {'metavar': '<URL>'}),
+         ('Specify a PEP stylesheet file, relative to the current working '
+          'directory.  The path is adjusted relative to the output HTML '
+          'file.  Overrides --pep-stylesheet and --stylesheet-path.',
+          ['--pep-stylesheet-path'],
+          {'metavar': '<path>'}),
+         ('Specify a template file.  Default is "pep-html-template".',
+          ['--pep-template'],
+          {'default': 'pep-html-template', 'metavar': '<file>'}),
+         ('Python\'s home URL.  Default is ".." (parent directory).',
+          ['--python-home'],
+          {'default': '..', 'metavar': '<URL>'}),
+         ('Home URL prefix for PEPs.  Default is "." (current directory).',
+          ['--pep-home'],
+          {'default': '.', 'metavar': '<URL>'}),
+         # Workaround for SourceForge's broken Python
+         # (``import random`` causes a segfault).
+         (frontend.SUPPRESS_HELP,
+          ['--no-random'], {'action': 'store_true'}),))
+
+    settings_default_overrides = {'footnote_references': 'brackets'}
+
+    relative_path_settings = ('pep_stylesheet_path', 'pep_template')
+
+    def __init__(self):
+        html4css1.Writer.__init__(self)
+        self.translator_class = HTMLTranslator
+
+    def translate(self):
+        html4css1.Writer.translate(self)
+        settings = self.document.settings
+        template = open(settings.pep_template).read()
+        # Substitutions dict for template:
+        subs = {}
+        subs['encoding'] = settings.output_encoding
+        subs['version'] = docutils.__version__
+        subs['stylesheet'] = ''.join(self.stylesheet)
+        pyhome = settings.python_home
+        subs['pyhome'] = pyhome
+        subs['pephome'] = settings.pep_home
+        if pyhome == '..':
+            subs['pepindex'] = '.'
+        else:
+            subs['pepindex'] = pyhome + '/peps/'
+        index = self.document.first_child_matching_class(nodes.field_list)
+        header = self.document[index]
+        pepnum = header[0][1].astext()
+        subs['pep'] = pepnum
+        if settings.no_random:
+            subs['banner'] = 0
+        else:
+            import random
+            subs['banner'] = random.randrange(64)
+        try:
+            subs['pepnum'] = '%04i' % int(pepnum)
+        except:
+            subs['pepnum'] = pepnum
+        subs['title'] = header[1][1].astext()
+        subs['body'] = ''.join(
+            self.body_pre_docinfo + self.docinfo + self.body)
+        subs['body_suffix'] = ''.join(self.body_suffix)
+        self.output = template % subs
+
+
+class HTMLTranslator(html4css1.HTMLTranslator):
+
+    def get_stylesheet_reference(self, relative_to=None):
+        settings = self.settings
+        if relative_to == None:
+            relative_to = settings._destination
+        if settings.pep_stylesheet_path:
+            return utils.relative_path(relative_to,
+                                       settings.pep_stylesheet_path)
+        elif settings.pep_stylesheet:
+            return settings.pep_stylesheet
+        elif settings._stylesheet_path:
+            return utils.relative_path(relative_to, settings.stylesheet_path)
+        else:
+            return settings.stylesheet
+
+    def depart_field_list(self, node):
+        html4css1.HTMLTranslator.depart_field_list(self, node)
+        if node.get('class') == 'rfc2822':
+             self.body.append('<hr />\n')

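The translate() method above renders the final page by filling a plain
"%(name)s" template with a substitution dictionary (encoding, stylesheet,
PEP number, title, body, and so on). A hedged sketch of that mechanism,
using a made-up two-line template rather than the real pep-html-template
file:

    # Hypothetical template; the writer normally reads "pep-html-template".
    template = ("<html><head><title>PEP %(pepnum)s: %(title)s</title></head>\n"
                "<body>%(body)s</body></html>\n")
    subs = {'pepnum': '0287',
            'title': 'reStructuredText Docstring Format',
            'body': '<p>...</p>'}
    print(template % subs)
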
Added: trunk/www/utils/helpers/docutils/docutils/writers/pseudoxml.py
===================================================================
--- trunk/www/utils/helpers/docutils/docutils/writers/pseudoxml.py     2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/docutils/writers/pseudoxml.py     2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,30 @@
+# Authors: David Goodger
+# Contact: address@hidden
+# Revision: $Revision: 1.3 $
+# Date: $Date: 2002/10/09 00:51:52 $
+# Copyright: This module has been placed in the public domain.
+
+"""
+Simple internal document tree Writer, writes indented pseudo-XML.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+from docutils import writers
+
+
+class Writer(writers.Writer):
+
+    supported = ('pprint', 'pformat', 'pseudoxml')
+    """Formats this writer supports."""
+
+    output = None
+    """Final translated form of `document`."""
+
+    def translate(self):
+        self.output = self.document.pformat()
+
+    def supports(self, format):
+        """This writer supports all format-specific elements."""
+        return 1

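The pseudoxml writer above simply emits document.pformat(), which makes it a
convenient way to inspect the parsed node tree. A usage sketch, assuming this
docutils snapshot exposes docutils.core.publish_string with a writer_name
argument (as docutils releases of this era do):

    from docutils.core import publish_string

    # Parse a small reST snippet and dump the document tree as pseudo-XML.
    source = "Title\n=====\n\nSome *emphasised* text.\n"
    print(publish_string(source, writer_name='pseudoxml'))
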
Added: trunk/www/utils/helpers/docutils/optparse.py
===================================================================
--- trunk/www/utils/helpers/docutils/optparse.py        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/optparse.py        2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,1401 @@
+"""optparse - a powerful, extensible, and easy-to-use option parser.
+
+By Greg Ward <address@hidden>
+
+Originally distributed as Optik; see http://optik.sourceforge.net/ .
+
+If you have problems with this module, please do not file bugs,
+patches, or feature requests with Python; instead, use Optik's
+SourceForge project page:
+  http://sourceforge.net/projects/optik
+
+For support, use the address@hidden mailing list
+(http://lists.sourceforge.net/lists/listinfo/optik-users).
+"""
+
+# Python developers: please do not make changes to this file, since
+# it is automatically generated from the Optik source code.
+
+__version__ = "1.4.1+"
+
+__all__ = ['Option',
+           'SUPPRESS_HELP',
+           'SUPPRESS_USAGE',
+           'STD_HELP_OPTION',
+           'STD_VERSION_OPTION',
+           'Values',
+           'OptionContainer',
+           'OptionGroup',
+           'OptionParser',
+           'HelpFormatter',
+           'IndentedHelpFormatter',
+           'TitledHelpFormatter',
+           'OptParseError',
+           'OptionError',
+           'OptionConflictError',
+           'OptionValueError',
+           'BadOptionError']
+
+__copyright__ = """
+Copyright (c) 2001-2003 Gregory P. Ward.  All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+  * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+
+  * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+  * Neither the name of the author nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR
+CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+"""
+
+import sys, os
+import types
+import textwrap
+
+class OptParseError (Exception):
+    def __init__ (self, msg):
+        self.msg = msg
+
+    def __str__ (self):
+        return self.msg
+
+
+class OptionError (OptParseError):
+    """
+    Raised if an Option instance is created with invalid or
+    inconsistent arguments.
+    """
+
+    def __init__ (self, msg, option):
+        self.msg = msg
+        self.option_id = str(option)
+
+    def __str__ (self):
+        if self.option_id:
+            return "option %s: %s" % (self.option_id, self.msg)
+        else:
+            return self.msg
+
+class OptionConflictError (OptionError):
+    """
+    Raised if conflicting options are added to an OptionParser.
+    """
+
+class OptionValueError (OptParseError):
+    """
+    Raised if an invalid option value is encountered on the command
+    line.
+    """
+
+class BadOptionError (OptParseError):
+    """
+    Raised if an invalid or ambiguous option is seen on the command-line.
+    """
+
+
+class HelpFormatter:
+
+    """
+    Abstract base class for formatting option help.  OptionParser
+    instances should use one of the HelpFormatter subclasses for
+    formatting help; by default IndentedHelpFormatter is used.
+
+    Instance attributes:
+      indent_increment : int
+        the number of columns to indent per nesting level
+      max_help_position : int
+        the maximum starting column for option help text
+      help_position : int
+        the calculated starting column for option help text;
+        initially the same as the maximum
+      width : int
+        total number of columns for output
+      level : int
+        current indentation level
+      current_indent : int
+        current indentation level (in columns)
+      help_width : int
+        number of columns available for option help text (calculated)
+    """
+
+    def __init__ (self,
+                  indent_increment,
+                  max_help_position,
+                  width,
+                  short_first):
+        self.indent_increment = indent_increment
+        self.help_position = self.max_help_position = max_help_position
+        self.width = width
+        self.current_indent = 0
+        self.level = 0
+        self.help_width = width - max_help_position
+        self.short_first = short_first
+
+    def indent (self):
+        self.current_indent += self.indent_increment
+        self.level += 1
+
+    def dedent (self):
+        self.current_indent -= self.indent_increment
+        assert self.current_indent >= 0, "Indent decreased below 0."
+        self.level -= 1
+
+    def format_usage (self, usage):
+        raise NotImplementedError, "subclasses must implement"
+
+    def format_heading (self, heading):
+        raise NotImplementedError, "subclasses must implement"
+
+    def format_description (self, description):
+        desc_width = self.width - self.current_indent
+        indent = " "*self.current_indent
+        return textwrap.fill(description, desc_width,
+                             initial_indent=indent,
+                             subsequent_indent=indent)
+
+    def format_option (self, option):
+        # The help for each option consists of two parts:
+        #   * the opt strings and metavars
+        #     eg. ("-x", or "-fFILENAME, --file=FILENAME")
+        #   * the user-supplied help string
+        #     eg. ("turn on expert mode", "read data from FILENAME")
+        #
+        # If possible, we write both of these on the same line:
+        #   -x      turn on expert mode
+        #
+        # But if the opt string list is too long, we put the help
+        # string on a second line, indented to the same column it would
+        # start in if it fit on the first line.
+        #   -fFILENAME, --file=FILENAME
+        #           read data from FILENAME
+        result = []
+        opts = option.option_strings
+        opt_width = self.help_position - self.current_indent - 2
+        if len(opts) > opt_width:
+            opts = "%*s%s\n" % (self.current_indent, "", opts)
+            indent_first = self.help_position
+        else:                       # start help on same line as opts
+            opts = "%*s%-*s  " % (self.current_indent, "", opt_width, opts)
+            indent_first = 0
+        result.append(opts)
+        if option.help:
+            help_lines = textwrap.wrap(option.help, self.help_width)
+            result.append("%*s%s\n" % (indent_first, "", help_lines[0]))
+            result.extend(["%*s%s\n" % (self.help_position, "", line)
+                           for line in help_lines[1:]])
+        elif opts[-1] != "\n":
+            result.append("\n")
+        return "".join(result)
+
+    def store_option_strings (self, parser):
+        self.indent()
+        max_len = 0
+        for opt in parser.option_list:
+            strings = self.format_option_strings(opt)
+            opt.option_strings = strings
+            max_len = max(max_len, len(strings) + self.current_indent)
+        self.indent()
+        for group in parser.option_groups:
+            for opt in group.option_list:
+                strings = self.format_option_strings(opt)
+                opt.option_strings = strings
+                max_len = max(max_len, len(strings) + self.current_indent)
+        self.dedent()
+        self.dedent()
+        self.help_position = min(max_len + 2, self.max_help_position)
+
+    def format_option_strings (self, option):
+        """Return a comma-separated list of option strings & metavariables."""
+        if option.takes_value():
+            metavar = option.metavar or option.dest.upper()
+            short_opts = [sopt + metavar for sopt in option._short_opts]
+            long_opts = [lopt + "=" + metavar for lopt in option._long_opts]
+        else:
+            short_opts = option._short_opts
+            long_opts = option._long_opts
+
+        if self.short_first:
+            opts = short_opts + long_opts
+        else:
+            opts = long_opts + short_opts
+
+        return ", ".join(opts)
+
+class IndentedHelpFormatter (HelpFormatter):
+    """Format help with indented section bodies.
+    """
+
+    def __init__ (self,
+                  indent_increment=2,
+                  max_help_position=24,
+                  width=78,
+                  short_first=1):
+        HelpFormatter.__init__(
+            self, indent_increment, max_help_position, width, short_first)
+
+    def format_usage (self, usage):
+        return "usage: %s\n" % usage
+
+    def format_heading (self, heading):
+        return "%*s%s:\n" % (self.current_indent, "", heading)
+
+
+class TitledHelpFormatter (HelpFormatter):
+    """Format help with underlined section headers.
+    """
+
+    def __init__ (self,
+                  indent_increment=0,
+                  max_help_position=24,
+                  width=78,
+                  short_first=0):
+        HelpFormatter.__init__ (
+            self, indent_increment, max_help_position, width, short_first)
+
+    def format_usage (self, usage):
+        return "%s  %s\n" % (self.format_heading("Usage"), usage)
+
+    def format_heading (self, heading):
+        return "%s\n%s\n" % (heading, "=-"[self.level] * len(heading))
+
+
+_builtin_cvt = { "int" : (int, "integer"),
+                 "long" : (long, "long integer"),
+                 "float" : (float, "floating-point"),
+                 "complex" : (complex, "complex") }
+
+def check_builtin (option, opt, value):
+    (cvt, what) = _builtin_cvt[option.type]
+    try:
+        return cvt(value)
+    except ValueError:
+        raise OptionValueError(
+            #"%s: invalid %s argument %r" % (opt, what, value))
+            "option %s: invalid %s value: %r" % (opt, what, value))
+
+def check_choice(option, opt, value):
+    if value in option.choices:
+        return value
+    else:
+        choices = ", ".join(map(repr, option.choices))
+        raise OptionValueError(
+            "option %s: invalid choice: %r (choose from %s)"
+            % (opt, value, choices))
+
+# Not supplying a default is different from a default of None,
+# so we need an explicit "not supplied" value.
+NO_DEFAULT = "NO"+"DEFAULT"
+
+
+class Option:
+    """
+    Instance attributes:
+      _short_opts : [string]
+      _long_opts : [string]
+
+      action : string
+      type : string
+      dest : string
+      default : any
+      nargs : int
+      const : any
+      choices : [string]
+      callback : function
+      callback_args : (any*)
+      callback_kwargs : { string : any }
+      help : string
+      metavar : string
+    """
+
+    # The list of instance attributes that may be set through
+    # keyword args to the constructor.
+    ATTRS = ['action',
+             'type',
+             'dest',
+             'default',
+             'nargs',
+             'const',
+             'choices',
+             'callback',
+             'callback_args',
+             'callback_kwargs',
+             'help',
+             'metavar']
+
+    # The set of actions allowed by option parsers.  Explicitly listed
+    # here so the constructor can validate its arguments.
+    ACTIONS = ("store",
+               "store_const",
+               "store_true",
+               "store_false",
+               "append",
+               "count",
+               "callback",
+               "help",
+               "version")
+
+    # The set of actions that involve storing a value somewhere;
+    # also listed just for constructor argument validation.  (If
+    # the action is one of these, there must be a destination.)
+    STORE_ACTIONS = ("store",
+                     "store_const",
+                     "store_true",
+                     "store_false",
+                     "append",
+                     "count")
+
+    # The set of actions for which it makes sense to supply a value
+    # type, ie. where we expect an argument to this option.
+    TYPED_ACTIONS = ("store",
+                     "append",
+                     "callback")
+
+    # The set of known types for option parsers.  Again, listed here for
+    # constructor argument validation.
+    TYPES = ("string", "int", "long", "float", "complex", "choice")
+
+    # Dictionary of argument checking functions, which convert and
+    # validate option arguments according to the option type.
+    #
+    # Signature of checking functions is:
+    #   check(option : Option, opt : string, value : string) -> any
+    # where
+    #   option is the Option instance calling the checker
+    #   opt is the actual option seen on the command-line
+    #     (eg. "-a", "--file")
+    #   value is the option argument seen on the command-line
+    #
+    # The return value should be in the appropriate Python type
+    # for option.type -- eg. an integer if option.type == "int".
+    #
+    # If no checker is defined for a type, arguments will be
+    # unchecked and remain strings.
+    TYPE_CHECKER = { "int"    : check_builtin,
+                     "long"   : check_builtin,
+                     "float"  : check_builtin,
+                     "complex"  : check_builtin,
+                     "choice" : check_choice,
+                   }
+
+
+    # CHECK_METHODS is a list of unbound method objects; they are called
+    # by the constructor, in order, after all attributes are
+    # initialized.  The list is created and filled in later, after all
+    # the methods are actually defined.  (I just put it here because I
+    # like to define and document all class attributes in the same
+    # place.)  Subclasses that add another _check_*() method should
+    # define their own CHECK_METHODS list that adds their check method
+    # to those from this class.
+    CHECK_METHODS = None
+
+
+    # -- Constructor/initialization methods ----------------------------
+
+    def __init__ (self, *opts, **attrs):
+        # Set _short_opts, _long_opts attrs from 'opts' tuple.
+        # Have to be set now, in case no option strings are supplied.
+        self._short_opts = []
+        self._long_opts = []
+        opts = self._check_opt_strings(opts)
+        self._set_opt_strings(opts)
+
+        # Set all other attrs (action, type, etc.) from 'attrs' dict
+        self._set_attrs(attrs)
+
+        # Check all the attributes we just set.  There are lots of
+        # complicated interdependencies, but luckily they can be farmed
+        # out to the _check_*() methods listed in CHECK_METHODS -- which
+        # could be handy for subclasses!  The one thing these all share
+        # is that they raise OptionError if they discover a problem.
+        for checker in self.CHECK_METHODS:
+            checker(self)
+
+    def _check_opt_strings (self, opts):
+        # Filter out None because early versions of Optik had exactly
+        # one short option and one long option, either of which
+        # could be None.
+        opts = filter(None, opts)
+        if not opts:
+            raise TypeError("at least one option string must be supplied")
+        return opts
+
+    def _set_opt_strings (self, opts):
+        for opt in opts:
+            if len(opt) < 2:
+                raise OptionError(
+                    "invalid option string %r: "
+                    "must be at least two characters long" % opt, self)
+            elif len(opt) == 2:
+                if not (opt[0] == "-" and opt[1] != "-"):
+                    raise OptionError(
+                        "invalid short option string %r: "
+                        "must be of the form -x, (x any non-dash char)" % opt,
+                        self)
+                self._short_opts.append(opt)
+            else:
+                if not (opt[0:2] == "--" and opt[2] != "-"):
+                    raise OptionError(
+                        "invalid long option string %r: "
+                        "must start with --, followed by non-dash" % opt,
+                        self)
+                self._long_opts.append(opt)
+
+    def _set_attrs (self, attrs):
+        for attr in self.ATTRS:
+            if attrs.has_key(attr):
+                setattr(self, attr, attrs[attr])
+                del attrs[attr]
+            else:
+                if attr == 'default':
+                    setattr(self, attr, NO_DEFAULT)
+                else:
+                    setattr(self, attr, None)
+        if attrs:
+            raise OptionError(
+                "invalid keyword arguments: %s" % ", ".join(attrs.keys()),
+                self)
+
+
+    # -- Constructor validation methods --------------------------------
+
+    def _check_action (self):
+        if self.action is None:
+            self.action = "store"
+        elif self.action not in self.ACTIONS:
+            raise OptionError("invalid action: %r" % self.action, self)
+
+    def _check_type (self):
+        if self.type is None:
+            # XXX should factor out another class attr here: list of
+            # actions that *require* a type
+            if self.action in ("store", "append"):
+                if self.choices is not None:
+                    # The "choices" attribute implies "choice" type.
+                    self.type = "choice"
+                else:
+                    # No type given?  "string" is the most sensible default.
+                    self.type = "string"
+        else:
+            if self.type not in self.TYPES:
+                raise OptionError("invalid option type: %r" % self.type, self)
+            if self.action not in self.TYPED_ACTIONS:
+                raise OptionError(
+                    "must not supply a type for action %r" % self.action, self)
+
+    def _check_choice(self):
+        if self.type == "choice":
+            if self.choices is None:
+                raise OptionError(
+                    "must supply a list of choices for type 'choice'", self)
+            elif type(self.choices) not in (types.TupleType, types.ListType):
+                raise OptionError(
+                    "choices must be a list of strings ('%s' supplied)"
+                    % str(type(self.choices)).split("'")[1], self)
+        elif self.choices is not None:
+            raise OptionError(
+                "must not supply choices for type %r" % self.type, self)
+
+    def _check_dest (self):
+        if self.action in self.STORE_ACTIONS and self.dest is None:
+            # No destination given, and we need one for this action.
+            # Glean a destination from the first long option string,
+            # or from the first short option string if no long options.
+            if self._long_opts:
+                # eg. "--foo-bar" -> "foo_bar"
+                self.dest = self._long_opts[0][2:].replace('-', '_')
+            else:
+                self.dest = self._short_opts[0][1]
+
+    def _check_const (self):
+        if self.action != "store_const" and self.const is not None:
+            raise OptionError(
+                "'const' must not be supplied for action %r" % self.action,
+                self)
+
+    def _check_nargs (self):
+        if self.action in self.TYPED_ACTIONS:
+            if self.nargs is None:
+                self.nargs = 1
+        elif self.nargs is not None:
+            raise OptionError(
+                "'nargs' must not be supplied for action %r" % self.action,
+                self)
+
+    def _check_callback (self):
+        if self.action == "callback":
+            if not callable(self.callback):
+                raise OptionError(
+                    "callback not callable: %r" % self.callback, self)
+            if (self.callback_args is not None and
+                type(self.callback_args) is not types.TupleType):
+                raise OptionError(
+                    "callback_args, if supplied, must be a tuple: not %r"
+                    % self.callback_args, self)
+            if (self.callback_kwargs is not None and
+                type(self.callback_kwargs) is not types.DictType):
+                raise OptionError(
+                    "callback_kwargs, if supplied, must be a dict: not %r"
+                    % self.callback_kwargs, self)
+        else:
+            if self.callback is not None:
+                raise OptionError(
+                    "callback supplied (%r) for non-callback option"
+                    % self.callback, self)
+            if self.callback_args is not None:
+                raise OptionError(
+                    "callback_args supplied for non-callback option", self)
+            if self.callback_kwargs is not None:
+                raise OptionError(
+                    "callback_kwargs supplied for non-callback option", self)
+
+
+    CHECK_METHODS = [_check_action,
+                     _check_type,
+                     _check_choice,
+                     _check_dest,
+                     _check_const,
+                     _check_nargs,
+                     _check_callback]
+
+
+    # -- Miscellaneous methods -----------------------------------------
+
+    def __str__ (self):
+        return "/".join(self._short_opts + self._long_opts)
+
+    def takes_value (self):
+        return self.type is not None
+
+
+    # -- Processing methods --------------------------------------------
+
+    def check_value (self, opt, value):
+        checker = self.TYPE_CHECKER.get(self.type)
+        if checker is None:
+            return value
+        else:
+            return checker(self, opt, value)
+
+    def process (self, opt, value, values, parser):
+
+        # First, convert the value(s) to the right type.  Howl if any
+        # value(s) are bogus.
+        if value is not None:
+            if self.nargs == 1:
+                value = self.check_value(opt, value)
+            else:
+                value = tuple([self.check_value(opt, v) for v in value])
+
+        # And then take whatever action is expected of us.
+        # This is a separate method to make life easier for
+        # subclasses to add new actions.
+        return self.take_action(
+            self.action, self.dest, opt, value, values, parser)
+
+    def take_action (self, action, dest, opt, value, values, parser):
+        if action == "store":
+            setattr(values, dest, value)
+        elif action == "store_const":
+            setattr(values, dest, self.const)
+        elif action == "store_true":
+            setattr(values, dest, True)
+        elif action == "store_false":
+            setattr(values, dest, False)
+        elif action == "append":
+            values.ensure_value(dest, []).append(value)
+        elif action == "count":
+            setattr(values, dest, values.ensure_value(dest, 0) + 1)
+        elif action == "callback":
+            args = self.callback_args or ()
+            kwargs = self.callback_kwargs or {}
+            self.callback(self, opt, value, parser, *args, **kwargs)
+        elif action == "help":
+            parser.print_help()
+            sys.exit(0)
+        elif action == "version":
+            parser.print_version()
+            sys.exit(0)
+        else:
+            raise RuntimeError, "unknown action %r" % self.action
+
+        return 1
+
+# class Option
+
+
+SUPPRESS_HELP = "SUPPRESS"+"HELP"
+SUPPRESS_USAGE = "SUPPRESS"+"USAGE"
+
+STD_HELP_OPTION = Option("-h", "--help",
+                         action="help",
+                         help="show this help message and exit")
+STD_VERSION_OPTION = Option("--version",
+                            action="version",
+                            help="show program's version number and exit")
+
+
+class Values:
+
+    def __init__ (self, defaults=None):
+        if defaults:
+            for (attr, val) in defaults.items():
+                setattr(self, attr, val)
+
+    def __repr__ (self):
+        return ("<%s at 0x%x: %r>"
+                % (self.__class__.__name__, id(self), self.__dict__))
+
+    def _update_careful (self, dict):
+        """
+        Update the option values from an arbitrary dictionary, but only
+        use keys from dict that already have a corresponding attribute
+        in self.  Any keys in dict without a corresponding attribute
+        are silently ignored.
+        """
+        for attr in dir(self):
+            if dict.has_key(attr):
+                dval = dict[attr]
+                if dval is not None:
+                    setattr(self, attr, dval)
+
+    def _update_loose (self, dict):
+        """
+        Update the option values from an arbitrary dictionary,
+        using all keys from the dictionary regardless of whether
+        they have a corresponding attribute in self or not.
+        """
+        self.__dict__.update(dict)
+
+    def _update (self, dict, mode):
+        if mode == "careful":
+            self._update_careful(dict)
+        elif mode == "loose":
+            self._update_loose(dict)
+        else:
+            raise ValueError, "invalid update mode: %r" % mode
+
+    def read_module (self, modname, mode="careful"):
+        __import__(modname)
+        mod = sys.modules[modname]
+        self._update(vars(mod), mode)
+
+    def read_file (self, filename, mode="careful"):
+        vars = {}
+        execfile(filename, vars)
+        self._update(vars, mode)
+
+    def ensure_value (self, attr, value):
+        if not hasattr(self, attr) or getattr(self, attr) is None:
+            setattr(self, attr, value)
+        return getattr(self, attr)
+
+
+class OptionContainer:
+
+    """
+    Abstract base class.
+
+    Class attributes:
+      standard_option_list : [Option]
+        list of standard options that will be accepted by all instances
+        of this parser class (intended to be overridden by subclasses).
+
+    Instance attributes:
+      option_list : [Option]
+        the list of Option objects contained by this OptionContainer
+      _short_opt : { string : Option }
+        dictionary mapping short option strings, eg. "-f" or "-X",
+        to the Option instances that implement them.  If an Option
+        has multiple short option strings, it will appear in this
+        dictionary multiple times. [1]
+      _long_opt : { string : Option }
+        dictionary mapping long option strings, eg. "--file" or
+        "--exclude", to the Option instances that implement them.
+        Again, a given Option can occur multiple times in this
+        dictionary. [1]
+      defaults : { string : any }
+        dictionary mapping option destination names to default
+        values for each destination [1]
+
+    [1] These mappings are common to (shared by) all components of the
+        controlling OptionParser, where they are initially created.
+
+    """
+
+    def __init__ (self, option_class, conflict_handler, description):
+        # Initialize the option list and related data structures.
+        # This method must be provided by subclasses, and it must
+        # initialize at least the following instance attributes:
+        # option_list, _short_opt, _long_opt, defaults.
+        self._create_option_list()
+
+        self.option_class = option_class
+        self.set_conflict_handler(conflict_handler)
+        self.set_description(description)
+
+    def _create_option_mappings (self):
+        # For use by OptionParser constructor -- create the master
+        # option mappings used by this OptionParser and all
+        # OptionGroups that it owns.
+        self._short_opt = {}            # single letter -> Option instance
+        self._long_opt = {}             # long option -> Option instance
+        self.defaults = {}              # maps option dest -> default value
+
+
+    def _share_option_mappings (self, parser):
+        # For use by OptionGroup constructor -- use shared option
+        # mappings from the OptionParser that owns this OptionGroup.
+        self._short_opt = parser._short_opt
+        self._long_opt = parser._long_opt
+        self.defaults = parser.defaults
+
+    def set_conflict_handler (self, handler):
+        if handler not in ("ignore", "error", "resolve"):
+            raise ValueError, "invalid conflict_resolution value %r" % handler
+        self.conflict_handler = handler
+
+    def set_description (self, description):
+        self.description = description
+
+
+    # -- Option-adding methods -----------------------------------------
+
+    def _check_conflict (self, option):
+        conflict_opts = []
+        for opt in option._short_opts:
+            if self._short_opt.has_key(opt):
+                conflict_opts.append((opt, self._short_opt[opt]))
+        for opt in option._long_opts:
+            if self._long_opt.has_key(opt):
+                conflict_opts.append((opt, self._long_opt[opt]))
+
+        if conflict_opts:
+            handler = self.conflict_handler
+            if handler == "ignore":     # behaviour for Optik 1.0, 1.1
+                pass
+            elif handler == "error":    # new in 1.2
+                raise OptionConflictError(
+                    "conflicting option string(s): %s"
+                    % ", ".join([co[0] for co in conflict_opts]),
+                    option)
+            elif handler == "resolve":  # new in 1.2
+                for (opt, c_option) in conflict_opts:
+                    if opt.startswith("--"):
+                        c_option._long_opts.remove(opt)
+                        del self._long_opt[opt]
+                    else:
+                        c_option._short_opts.remove(opt)
+                        del self._short_opt[opt]
+                    if not (c_option._short_opts or c_option._long_opts):
+                        c_option.container.option_list.remove(c_option)
+
+    def add_option (self, *args, **kwargs):
+        """add_option(Option)
+           add_option(opt_str, ..., kwarg=val, ...)
+        """
+        if type(args[0]) is types.StringType:
+            option = self.option_class(*args, **kwargs)
+        elif len(args) == 1 and not kwargs:
+            option = args[0]
+            if not isinstance(option, Option):
+                raise TypeError, "not an Option instance: %r" % option
+        else:
+            raise TypeError, "invalid arguments"
+
+        self._check_conflict(option)
+
+        self.option_list.append(option)
+        option.container = self
+        for opt in option._short_opts:
+            self._short_opt[opt] = option
+        for opt in option._long_opts:
+            self._long_opt[opt] = option
+
+        if option.dest is not None:     # option has a dest, we need a default
+            if option.default is not NO_DEFAULT:
+                self.defaults[option.dest] = option.default
+            elif not self.defaults.has_key(option.dest):
+                self.defaults[option.dest] = None
+
+        return option
+
+    def add_options (self, option_list):
+        for option in option_list:
+            self.add_option(option)
+
+    # -- Option query/removal methods ----------------------------------
+
+    def get_option (self, opt_str):
+        return (self._short_opt.get(opt_str) or
+                self._long_opt.get(opt_str))
+
+    def has_option (self, opt_str):
+        return (self._short_opt.has_key(opt_str) or
+                self._long_opt.has_key(opt_str))
+
+    def remove_option (self, opt_str):
+        option = self._short_opt.get(opt_str)
+        if option is None:
+            option = self._long_opt.get(opt_str)
+        if option is None:
+            raise ValueError("no such option %r" % opt_str)
+
+        for opt in option._short_opts:
+            del self._short_opt[opt]
+        for opt in option._long_opts:
+            del self._long_opt[opt]
+        option.container.option_list.remove(option)
+
+
+    # -- Help-formatting methods ---------------------------------------
+
+    def format_option_help (self, formatter):
+        if not self.option_list:
+            return ""
+        result = []
+        for option in self.option_list:
+            if not option.help is SUPPRESS_HELP:
+                result.append(formatter.format_option(option))
+        return "".join(result)
+
+    def format_description (self, formatter):
+        if self.description:
+            return formatter.format_description(self.description)
+        else:
+            return ""
+
+    def format_help (self, formatter):
+        if self.description:
+            desc = self.format_description(formatter) + "\n"
+        else:
+            desc = ""
+        return desc + self.format_option_help(formatter)
+
+
+class OptionGroup (OptionContainer):
+
+    def __init__ (self, parser, title, description=None):
+        self.parser = parser
+        OptionContainer.__init__(
+            self, parser.option_class, parser.conflict_handler, description)
+        self.title = title
+
+    def _create_option_list (self):
+        self.option_list = []
+        self._share_option_mappings(self.parser)
+
+    def set_title (self, title):
+        self.title = title
+
+    # -- Help-formatting methods ---------------------------------------
+
+    def format_help (self, formatter):
+        result = formatter.format_heading(self.title)
+        formatter.indent()
+        result += OptionContainer.format_help(self, formatter)
+        formatter.dedent()
+        return result
+
+
+class OptionParser (OptionContainer):
+
+    """
+    Class attributes:
+      standard_option_list : [Option]
+        list of standard options that will be accepted by all instances
+        of this parser class (intended to be overridden by subclasses).
+
+    Instance attributes:
+      usage : string
+        a usage string for your program.  Before it is displayed
+        to the user, "%prog" will be expanded to the name of
+        your program (self.prog or os.path.basename(sys.argv[0])).
+      prog : string
+        the name of the current program (to override
+        os.path.basename(sys.argv[0])).
+
+      allow_interspersed_args : boolean = true
+        if true, positional arguments may be interspersed with options.
+        Assuming -a and -b each take a single argument, the command-line
+          -ablah foo bar -bboo baz
+        will be interpreted the same as
+          -ablah -bboo -- foo bar baz
+        If this flag were false, that command line would be interpreted as
+          -ablah -- foo bar -bboo baz
+        -- ie. we stop processing options as soon as we see the first
+        non-option argument.  (This is the tradition followed by
+        Python's getopt module, Perl's Getopt::Std, and other argument-
+        parsing libraries, but it is generally annoying to users.)
+
+      rargs : [string]
+        the argument list currently being parsed.  Only set when
+        parse_args() is active, and continually trimmed down as
+        we consume arguments.  Mainly there for the benefit of
+        callback options.
+      largs : [string]
+        the list of leftover arguments that we have skipped while
+        parsing options.  If allow_interspersed_args is false, this
+        list is always empty.
+      values : Values
+        the set of option values currently being accumulated.  Only
+        set when parse_args() is active.  Also mainly for callbacks.
+
+    Because of the 'rargs', 'largs', and 'values' attributes,
+    OptionParser is not thread-safe.  If, for some perverse reason, you
+    need to parse command-line arguments simultaneously in different
+    threads, use different OptionParser instances.
+
+    """
+
+    standard_option_list = []
+
+    def __init__ (self,
+                  usage=None,
+                  option_list=None,
+                  option_class=Option,
+                  version=None,
+                  conflict_handler="error",
+                  description=None,
+                  formatter=None,
+                  add_help_option=1,
+                  prog=None):
+        OptionContainer.__init__(
+            self, option_class, conflict_handler, description)
+        self.set_usage(usage)
+        self.prog = prog
+        self.version = version
+        self.allow_interspersed_args = 1
+        if formatter is None:
+            formatter = IndentedHelpFormatter()
+        self.formatter = formatter
+
+        # Populate the option list; initial sources are the
+        # standard_option_list class attribute, the 'option_list'
+        # argument, and the STD_VERSION_OPTION (if 'version' supplied)
+        # and STD_HELP_OPTION globals.
+        self._populate_option_list(option_list,
+                                   add_help=add_help_option)
+
+        self._init_parsing_state()
+
+    # -- Private methods -----------------------------------------------
+    # (used by our or OptionContainer's constructor)
+
+    def _create_option_list (self):
+        self.option_list = []
+        self.option_groups = []
+        self._create_option_mappings()
+
+    def _populate_option_list (self, option_list, add_help=1):
+        if self.standard_option_list:
+            self.add_options(self.standard_option_list)
+        if option_list:
+            self.add_options(option_list)
+        if self.version:
+            self.add_option(STD_VERSION_OPTION)
+        if add_help:
+            self.add_option(STD_HELP_OPTION)
+
+    def _init_parsing_state (self):
+        # These are set in parse_args() for the convenience of callbacks.
+        self.rargs = None
+        self.largs = None
+        self.values = None
+
+
+    # -- Simple modifier methods ---------------------------------------
+
+    def set_usage (self, usage):
+        if usage is None:
+            self.usage = "%prog [options]"
+        elif usage is SUPPRESS_USAGE:
+            self.usage = None
+        elif usage.startswith("usage: "):
+            # for backwards compatibility with Optik 1.3 and earlier
+            self.usage = usage[7:]
+        else:
+            self.usage = usage
+
+    def enable_interspersed_args (self):
+        self.allow_interspersed_args = 1
+
+    def disable_interspersed_args (self):
+        self.allow_interspersed_args = 0
+
+    def set_default (self, dest, value):
+        self.defaults[dest] = value
+
+    def set_defaults (self, **kwargs):
+        self.defaults.update(kwargs)
+
+    def get_default_values (self):
+        return Values(self.defaults)
+
+
+    # -- OptionGroup methods -------------------------------------------
+
+    def add_option_group (self, *args, **kwargs):
+        # XXX lots of overlap with OptionContainer.add_option()
+        if type(args[0]) is types.StringType:
+            group = OptionGroup(self, *args, **kwargs)
+        elif len(args) == 1 and not kwargs:
+            group = args[0]
+            if not isinstance(group, OptionGroup):
+                raise TypeError, "not an OptionGroup instance: %r" % group
+            if group.parser is not self:
+                raise ValueError, "invalid OptionGroup (wrong parser)"
+        else:
+            raise TypeError, "invalid arguments"
+
+        self.option_groups.append(group)
+        return group
+
+    def get_option_group (self, opt_str):
+        option = (self._short_opt.get(opt_str) or
+                  self._long_opt.get(opt_str))
+        if option and option.container is not self:
+            return option.container
+        return None
+
+
+    # -- Option-parsing methods ----------------------------------------
+
+    def _get_args (self, args):
+        if args is None:
+            return sys.argv[1:]
+        else:
+            return args[:]              # don't modify caller's list
+
+    def parse_args (self, args=None, values=None):
+        """
+        parse_args(args : [string] = sys.argv[1:],
+                   values : Values = None)
+        -> (values : Values, args : [string])
+
+        Parse the command-line options found in 'args' (default:
+        sys.argv[1:]).  Any errors result in a call to 'error()', which
+        by default prints the usage message to stderr and calls
+        sys.exit() with an error message.  On success returns a pair
+        (values, args) where 'values' is a Values instance (with all
+        your option values) and 'args' is the list of arguments left
+        over after parsing options.
+        """
+        rargs = self._get_args(args)
+        if values is None:
+            values = self.get_default_values()
+
+        # Store the halves of the argument list as attributes for the
+        # convenience of callbacks:
+        #   rargs
+        #     the rest of the command-line (the "r" stands for
+        #     "remaining" or "right-hand")
+        #   largs
+        #     the leftover arguments -- ie. what's left after removing
+        #     options and their arguments (the "l" stands for "leftover"
+        #     or "left-hand")
+        self.rargs = rargs
+        self.largs = largs = []
+        self.values = values
+
+        try:
+            stop = self._process_args(largs, rargs, values)
+        except (BadOptionError, OptionValueError), err:
+            self.error(err.msg)
+
+        args = largs + rargs
+        return self.check_values(values, args)
+
+    def check_values (self, values, args):
+        """
+        check_values(values : Values, args : [string])
+        -> (values : Values, args : [string])
+
+        Check that the supplied option values and leftover arguments are
+        valid.  Returns the option values and leftover arguments
+        (possibly adjusted, possibly completely new -- whatever you
+        like).  Default implementation just returns the passed-in
+        values; subclasses may override as desired.
+        """
+        return (values, args)
+
+    def _process_args (self, largs, rargs, values):
+        """_process_args(largs : [string],
+                         rargs : [string],
+                         values : Values)
+
+        Process command-line arguments and populate 'values', consuming
+        options and arguments from 'rargs'.  If 'allow_interspersed_args' is
+        false, stop at the first non-option argument.  If true, accumulate any
+        interspersed non-option arguments in 'largs'.
+        """
+        while rargs:
+            arg = rargs[0]
+            # We handle bare "--" explicitly, and bare "-" is handled by the
+            # standard arg handler since the short arg case ensures that the
+            # len of the opt string is greater than 1.
+            if arg == "--":
+                del rargs[0]
+                return
+            elif arg[0:2] == "--":
+                # process a single long option (possibly with value(s))
+                self._process_long_opt(rargs, values)
+            elif arg[:1] == "-" and len(arg) > 1:
+                # process a cluster of short options (possibly with
+                # value(s) for the last one only)
+                self._process_short_opts(rargs, values)
+            elif self.allow_interspersed_args:
+                largs.append(arg)
+                del rargs[0]
+            else:
+                return                  # stop now, leave this arg in rargs
+
+        # Say this is the original argument list:
+        # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)]
+        #                            ^
+        # (we are about to process arg(i)).
+        #
+        # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of
+        # [arg0, ..., arg(i-1)] (any options and their arguments will have
+        # been removed from largs).
+        #
+        # The while loop will usually consume 1 or more arguments per pass.
+        # If it consumes 1 (eg. arg is an option that takes no arguments),
+        # then after _process_arg() is done the situation is:
+        #
+        #   largs = subset of [arg0, ..., arg(i)]
+        #   rargs = [arg(i+1), ..., arg(N-1)]
+        #
+        # If allow_interspersed_args is false, largs will always be
+        # *empty* -- still a subset of [arg0, ..., arg(i-1)], but
+        # not a very interesting subset!
+
+    def _match_long_opt (self, opt):
+        """_match_long_opt(opt : string) -> string
+
+        Determine which long option string 'opt' matches, ie. which one
+        it is an unambiguous abbreviation for.  Raises BadOptionError if
+        'opt' doesn't unambiguously match any long option string.
+        """
+        return _match_abbrev(opt, self._long_opt)
+
+    def _process_long_opt (self, rargs, values):
+        arg = rargs.pop(0)
+
+        # Value explicitly attached to arg?  Pretend it's the next
+        # argument.
+        if "=" in arg:
+            (opt, next_arg) = arg.split("=", 1)
+            rargs.insert(0, next_arg)
+            had_explicit_value = 1
+        else:
+            opt = arg
+            had_explicit_value = 0
+
+        opt = self._match_long_opt(opt)
+        option = self._long_opt[opt]
+        if option.takes_value():
+            nargs = option.nargs
+            if len(rargs) < nargs:
+                if nargs == 1:
+                    self.error("%s option requires a value" % opt)
+                else:
+                    self.error("%s option requires %d values"
+                               % (opt, nargs))
+            elif nargs == 1:
+                value = rargs.pop(0)
+            else:
+                value = tuple(rargs[0:nargs])
+                del rargs[0:nargs]
+
+        elif had_explicit_value:
+            self.error("%s option does not take a value" % opt)
+
+        else:
+            value = None
+
+        option.process(opt, value, values, self)
+
+    def _process_short_opts (self, rargs, values):
+        arg = rargs.pop(0)
+        stop = 0
+        i = 1
+        for ch in arg[1:]:
+            opt = "-" + ch
+            option = self._short_opt.get(opt)
+            i += 1                      # we have consumed a character
+
+            if not option:
+                self.error("no such option: %s" % opt)
+            if option.takes_value():
+                # Any characters left in arg?  Pretend they're the
+                # next arg, and stop consuming characters of arg.
+                if i < len(arg):
+                    rargs.insert(0, arg[i:])
+                    stop = 1
+
+                nargs = option.nargs
+                if len(rargs) < nargs:
+                    if nargs == 1:
+                        self.error("%s option requires a value" % opt)
+                    else:
+                        self.error("%s option requires %s values"
+                                   % (opt, nargs))
+                elif nargs == 1:
+                    value = rargs.pop(0)
+                else:
+                    value = tuple(rargs[0:nargs])
+                    del rargs[0:nargs]
+
+            else:                       # option doesn't take a value
+                value = None
+
+            option.process(opt, value, values, self)
+
+            if stop:
+                break
+
+
+    # -- Feedback methods ----------------------------------------------
+
+    def get_prog_name (self):
+        if self.prog is None:
+            return os.path.basename(sys.argv[0])
+        else:
+            return self.prog
+
+    def error (self, msg):
+        """error(msg : string)
+
+        Print a usage message incorporating 'msg' to stderr and exit.
+        If you override this in a subclass, it should not return -- it
+        should either exit or raise an exception.
+        """
+        self.print_usage(sys.stderr)
+        sys.exit("%s: error: %s" % (self.get_prog_name(), msg))
+
+    def get_usage (self):
+        if self.usage:
+            return self.formatter.format_usage(
+                self.usage.replace("%prog", self.get_prog_name()))
+        else:
+            return ""
+
+    def print_usage (self, file=None):
+        """print_usage(file : file = stdout)
+
+        Print the usage message for the current program (self.usage) to
+        'file' (default stdout).  Any occurrence of the string "%prog" in
+        self.usage is replaced with the name of the current program
+        (basename of sys.argv[0]).  Does nothing if self.usage is empty
+        or not defined.
+        """
+        if self.usage:
+            print >>file, self.get_usage()
+
+    def get_version (self):
+        if self.version:
+            return self.version.replace("%prog", self.get_prog_name())
+        else:
+            return ""
+
+    def print_version (self, file=None):
+        """print_version(file : file = stdout)
+
+        Print the version message for this program (self.version) to
+        'file' (default stdout).  As with print_usage(), any occurrence
+        of "%prog" in self.version is replaced by the current program's
+        name.  Does nothing if self.version is empty or undefined.
+        """
+        if self.version:
+            print >>file, self.get_version()
+
+    def format_option_help (self, formatter=None):
+        if formatter is None:
+            formatter = self.formatter
+        formatter.store_option_strings(self)
+        result = []
+        result.append(formatter.format_heading("options"))
+        formatter.indent()
+        if self.option_list:
+            result.append(OptionContainer.format_option_help(self, formatter))
+            result.append("\n")
+        for group in self.option_groups:
+            result.append(group.format_help(formatter))
+            result.append("\n")
+        formatter.dedent()
+        # Drop the last "\n", or the header if no options or option groups:
+        return "".join(result[:-1])
+
+    def format_help (self, formatter=None):
+        if formatter is None:
+            formatter = self.formatter
+        result = []
+        if self.usage:
+            result.append(self.get_usage() + "\n")
+        if self.description:
+            result.append(self.format_description(formatter) + "\n")
+        result.append(self.format_option_help(formatter))
+        return "".join(result)
+
+    def print_help (self, file=None):
+        """print_help(file : file = stdout)
+
+        Print an extended help message, listing all options and any
+        help text provided with them, to 'file' (default stdout).
+        """
+        if file is None:
+            file = sys.stdout
+        file.write(self.format_help())
+
+# class OptionParser
+
+
+def _match_abbrev (s, wordmap):
+    """_match_abbrev(s : string, wordmap : {string : Option}) -> string
+
+    Return the string key in 'wordmap' for which 's' is an unambiguous
+    abbreviation.  If 's' is found to be ambiguous or doesn't match any of
+    'words', raise BadOptionError.
+    """
+    # Is there an exact match?
+    if wordmap.has_key(s):
+        return s
+    else:
+        # Isolate all words with s as a prefix.
+        possibilities = [word for word in wordmap.keys()
+                         if word.startswith(s)]
+        # No exact match, so there had better be just one possibility.
+        if len(possibilities) == 1:
+            return possibilities[0]
+        elif not possibilities:
+            raise BadOptionError("no such option: %s" % s)
+        else:
+            # More than one possible completion: ambiguous prefix.
+            raise BadOptionError("ambiguous option: %s (%s?)"
+                                 % (s, ", ".join(possibilities)))
+
+
+# Some day, there might be many Option classes.  As of Optik 1.3, the
+# preferred way to instantiate Options is indirectly, via make_option(),
+# which will become a factory function when there are many Option
+# classes.
+make_option = Option
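
For reference, a minimal usage sketch of the Optik-style parser bundled above. The
same interface later became the standard library's optparse module, which is used
here so the snippet runs as-is; the option names and file arguments are made-up
examples, not GNUe flags.

    # Sketch only: exercises the Optik/optparse interface shown above.
    # "--verbose", "--report" and the file names are illustrative.
    from optparse import OptionParser, make_option

    parser = OptionParser(usage="%prog [options] file",
                          option_list=[
        make_option("-v", "--verbose", action="store_true", dest="verbose",
                    help="print extra progress information"),
        make_option("-r", "--report", dest="report", metavar="FILE",
                    help="write the report to FILE"),
    ])

    # "--rep" is accepted because _match_abbrev() resolves unambiguous
    # abbreviations of long option names.
    options, args = parser.parse_args(["-v", "--rep", "out.txt", "input.grd"])
    print options.verbose    # True
    print options.report     # 'out.txt'
    print args               # ['input.grd']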

Added: trunk/www/utils/helpers/docutils/roman.py
===================================================================
--- trunk/www/utils/helpers/docutils/roman.py   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/roman.py   2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,81 @@
+"""Convert to and from Roman numerals"""
+
+__author__ = "Mark Pilgrim (address@hidden)"
+__version__ = "1.4"
+__date__ = "8 August 2001"
+__copyright__ = """Copyright (c) 2001 Mark Pilgrim
+
+This program is part of "Dive Into Python", a free Python tutorial for
+experienced programmers.  Visit http://diveintopython.org/ for the
+latest version.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the Python 2.1.1 license, available at
+http://www.python.org/2.1.1/license.html
+"""
+
+import re
+
+#Define exceptions
+class RomanError(Exception): pass
+class OutOfRangeError(RomanError): pass
+class NotIntegerError(RomanError): pass
+class InvalidRomanNumeralError(RomanError): pass
+
+#Define digit mapping
+romanNumeralMap = (('M',  1000),
+                   ('CM', 900),
+                   ('D',  500),
+                   ('CD', 400),
+                   ('C',  100),
+                   ('XC', 90),
+                   ('L',  50),
+                   ('XL', 40),
+                   ('X',  10),
+                   ('IX', 9),
+                   ('V',  5),
+                   ('IV', 4),
+                   ('I',  1))
+
+def toRoman(n):
+    """convert integer to Roman numeral"""
+    if not (0 < n < 5000):
+        raise OutOfRangeError, "number out of range (must be 1..4999)"
+    if int(n) <> n:
+        raise NotIntegerError, "decimals can not be converted"
+
+    result = ""
+    for numeral, integer in romanNumeralMap:
+        while n >= integer:
+            result += numeral
+            n -= integer
+    return result
+
+#Define pattern to detect valid Roman numerals
+romanNumeralPattern = re.compile("""
+    ^                   # beginning of string
+    M{0,4}              # thousands - 0 to 4 M's
+    (CM|CD|D?C{0,3})    # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's),
+                        #            or 500-800 (D, followed by 0 to 3 C's)
+    (XC|XL|L?X{0,3})    # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's),
+                        #        or 50-80 (L, followed by 0 to 3 X's)
+    (IX|IV|V?I{0,3})    # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's),
+                        #        or 5-8 (V, followed by 0 to 3 I's)
+    $                   # end of string
+    """ ,re.VERBOSE)
+
+def fromRoman(s):
+    """convert Roman numeral to integer"""
+    if not s:
+        raise InvalidRomanNumeralError, 'Input can not be blank'
+    if not romanNumeralPattern.search(s):
+        raise InvalidRomanNumeralError, 'Invalid Roman numeral: %s' % s
+
+    result = 0
+    index = 0
+    for numeral, integer in romanNumeralMap:
+        while s[index:index+len(numeral)] == numeral:
+            result += integer
+            index += len(numeral)
+    return result
+
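
A quick usage sketch of the two public functions in roman.py above; the numbers
are arbitrary examples.

    # Sketch only: the module converts in both directions and validates input.
    import roman

    print roman.toRoman(1987)         # 'MCMLXXXVII'
    print roman.fromRoman('MCMXLV')   # 1945

    try:
        roman.toRoman(5000)           # outside the supported 1..4999 range
    except roman.OutOfRangeError:
        print "5000 is out of range"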

Added: trunk/www/utils/helpers/docutils/textwrap.py
===================================================================
--- trunk/www/utils/helpers/docutils/textwrap.py        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/docutils/textwrap.py        2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,355 @@
+"""Text wrapping and filling.
+"""
+
+# Copyright (C) 1999-2001 Gregory P. Ward.
+# Copyright (C) 2002, 2003 Python Software Foundation.
+# Written by Greg Ward <address@hidden>
+
+__revision__ = "$Id: textwrap.py,v 1.1 2003/06/16 03:23:17 goodger Exp $"
+
+import string, re
+
+# Do the right thing with boolean values for all known Python versions
+# (so this module can be copied to projects that don't depend on Python
+# 2.3, e.g. Optik and Docutils).
+try:
+    True, False
+except NameError:
+    (True, False) = (1, 0)
+
+__all__ = ['TextWrapper', 'wrap', 'fill']
+
+# Hardcode the recognized whitespace characters to the US-ASCII
+# whitespace characters.  The main reason for doing this is that in
+# ISO-8859-1, 0xa0 is non-breaking whitespace, so in certain locales
+# that character winds up in string.whitespace.  Respecting
+# string.whitespace in those cases would 1) make textwrap treat 0xa0 the
+# same as any other whitespace char, which is clearly wrong (it's a
+# *non-breaking* space), 2) possibly cause problems with Unicode,
+# since 0xa0 is not in range(128).
+_whitespace = '\t\n\x0b\x0c\r '
+
+class TextWrapper:
+    """
+    Object for wrapping/filling text.  The public interface consists of
+    the wrap() and fill() methods; the other methods are just there for
+    subclasses to override in order to tweak the default behaviour.
+    If you want to completely replace the main wrapping algorithm,
+    you'll probably have to override _wrap_chunks().
+
+    Several instance attributes control various aspects of wrapping:
+      width (default: 70)
+        the maximum width of wrapped lines (unless break_long_words
+        is false)
+      initial_indent (default: "")
+        string that will be prepended to the first line of wrapped
+        output.  Counts towards the line's width.
+      subsequent_indent (default: "")
+        string that will be prepended to all lines save the first
+        of wrapped output; also counts towards each line's width.
+      expand_tabs (default: true)
+        Expand tabs in input text to spaces before further processing.
+        Each tab will become 1 .. 8 spaces, depending on its position in
+        its line.  If false, each tab is treated as a single character.
+      replace_whitespace (default: true)
+        Replace all whitespace characters in the input text by spaces
+        after tab expansion.  Note that if expand_tabs is false and
+        replace_whitespace is true, every tab will be converted to a
+        single space!
+      fix_sentence_endings (default: false)
+        Ensure that sentence-ending punctuation is always followed
+        by two spaces.  Off by default because the algorithm is
+        (unavoidably) imperfect.
+      break_long_words (default: true)
+        Break words longer than 'width'.  If false, those words will not
+        be broken, and some lines might be longer than 'width'.
+    """
+
+    whitespace_trans = string.maketrans(_whitespace, ' ' * len(_whitespace))
+
+    unicode_whitespace_trans = {}
+    uspace = ord(u' ')
+    for x in map(ord, _whitespace):
+        unicode_whitespace_trans[x] = uspace
+
+    # This funky little regex is just the trick for splitting
+    # text up into word-wrappable chunks.  E.g.
+    #   "Hello there -- you goof-ball, use the -b option!"
+    # splits into
+    #   Hello/ /there/ /--/ /you/ /goof-/ball,/ /use/ /the/ /-b/ /option!
+    # (after stripping out empty strings).
+    wordsep_re = re.compile(r'(\s+|'                  # any whitespace
+                            r'-*\w{2,}-(?=\w{2,})|'   # hyphenated words
+                            r'(?<=[\w\!\"\'\&\.\,\?])-{2,}(?=\w))')   # em-dash
+
+    # XXX will there be a locale-or-charset-aware version of
+    # string.lowercase in 2.3?
+    sentence_end_re = re.compile(r'[%s]'              # lowercase letter
+                                 r'[\.\!\?]'          # sentence-ending punct.
+                                 r'[\"\']?'           # optional end-of-quote
+                                 % string.lowercase)
+
+
+    def __init__ (self,
+                  width=70,
+                  initial_indent="",
+                  subsequent_indent="",
+                  expand_tabs=True,
+                  replace_whitespace=True,
+                  fix_sentence_endings=False,
+                  break_long_words=True):
+        self.width = width
+        self.initial_indent = initial_indent
+        self.subsequent_indent = subsequent_indent
+        self.expand_tabs = expand_tabs
+        self.replace_whitespace = replace_whitespace
+        self.fix_sentence_endings = fix_sentence_endings
+        self.break_long_words = break_long_words
+
+
+    # -- Private methods -----------------------------------------------
+    # (possibly useful for subclasses to override)
+
+    def _munge_whitespace(self, text):
+        """_munge_whitespace(text : string) -> string
+
+        Munge whitespace in text: expand tabs and convert all other
+        whitespace characters to spaces.  Eg. " foo\tbar\n\nbaz"
+        becomes " foo    bar  baz".
+        """
+        if self.expand_tabs:
+            text = text.expandtabs()
+        if self.replace_whitespace:
+            if isinstance(text, str):
+                text = text.translate(self.whitespace_trans)
+            elif isinstance(text, unicode):
+                text = text.translate(self.unicode_whitespace_trans)
+        return text
+
+
+    def _split(self, text):
+        """_split(text : string) -> [string]
+
+        Split the text to wrap into indivisible chunks.  Chunks are
+        not quite the same as words; see wrap_chunks() for full
+        details.  As an example, the text
+          Look, goof-ball -- use the -b option!
+        breaks into the following chunks:
+          'Look,', ' ', 'goof-', 'ball', ' ', '--', ' ',
+          'use', ' ', 'the', ' ', '-b', ' ', 'option!'
+        """
+        chunks = self.wordsep_re.split(text)
+        chunks = filter(None, chunks)
+        return chunks
+
+    def _fix_sentence_endings(self, chunks):
+        """_fix_sentence_endings(chunks : [string])
+
+        Correct for sentence endings buried in 'chunks'.  Eg. when the
+        original text contains "... foo.\nBar ...", munge_whitespace()
+        and split() will convert that to [..., "foo.", " ", "Bar", ...]
+        which has one too few spaces; this method simply changes the one
+        space to two.
+        """
+        i = 0
+        pat = self.sentence_end_re
+        while i < len(chunks)-1:
+            if chunks[i+1] == " " and pat.search(chunks[i]):
+                chunks[i+1] = "  "
+                i += 2
+            else:
+                i += 1
+
+    def _handle_long_word(self, chunks, cur_line, cur_len, width):
+        """_handle_long_word(chunks : [string],
+                             cur_line : [string],
+                             cur_len : int, width : int)
+
+        Handle a chunk of text (most likely a word, not whitespace) that
+        is too long to fit in any line.
+        """
+        space_left = width - cur_len
+
+        # If we're allowed to break long words, then do so: put as much
+        # of the next chunk onto the current line as will fit.
+        if self.break_long_words:
+            cur_line.append(chunks[0][0:space_left])
+            chunks[0] = chunks[0][space_left:]
+
+        # Otherwise, we have to preserve the long word intact.  Only add
+        # it to the current line if there's nothing already there --
+        # that minimizes how much we violate the width constraint.
+        elif not cur_line:
+            cur_line.append(chunks.pop(0))
+
+        # If we're not allowed to break long words, and there's already
+        # text on the current line, do nothing.  Next time through the
+        # main loop of _wrap_chunks(), we'll wind up here again, but
+        # cur_len will be zero, so the next line will be entirely
+        # devoted to the long word that we can't handle right now.
+
+    def _wrap_chunks(self, chunks):
+        """_wrap_chunks(chunks : [string]) -> [string]
+
+        Wrap a sequence of text chunks and return a list of lines of
+        length 'self.width' or less.  (If 'break_long_words' is false,
+        some lines may be longer than this.)  Chunks correspond roughly
+        to words and the whitespace between them: each chunk is
+        indivisible (modulo 'break_long_words'), but a line break can
+        come between any two chunks.  Chunks should not have internal
+        whitespace; ie. a chunk is either all whitespace or a "word".
+        Whitespace chunks will be removed from the beginning and end of
+        lines, but apart from that whitespace is preserved.
+        """
+        lines = []
+        if self.width <= 0:
+            raise ValueError("invalid width %r (must be > 0)" % self.width)
+
+        while chunks:
+
+            # Start the list of chunks that will make up the current line.
+            # cur_len is just the length of all the chunks in cur_line.
+            cur_line = []
+            cur_len = 0
+
+            # Figure out which static string will prefix this line.
+            if lines:
+                indent = self.subsequent_indent
+            else:
+                indent = self.initial_indent
+
+            # Maximum width for this line.
+            width = self.width - len(indent)
+
+            # First chunk on line is whitespace -- drop it, unless this
+            # is the very beginning of the text (ie. no lines started yet).
+            if chunks[0].strip() == '' and lines:
+                del chunks[0]
+
+            while chunks:
+                l = len(chunks[0])
+
+                # Can at least squeeze this chunk onto the current line.
+                if cur_len + l <= width:
+                    cur_line.append(chunks.pop(0))
+                    cur_len += l
+
+                # Nope, this line is full.
+                else:
+                    break
+
+            # The current line is full, and the next chunk is too big to
+            # fit on *any* line (not just this one).
+            if chunks and len(chunks[0]) > width:
+                self._handle_long_word(chunks, cur_line, cur_len, width)
+
+            # If the last chunk on this line is all whitespace, drop it.
+            if cur_line and cur_line[-1].strip() == '':
+                del cur_line[-1]
+
+            # Convert current line back to a string and store it in list
+            # of all lines (return value).
+            if cur_line:
+                lines.append(indent + ''.join(cur_line))
+
+        return lines
+
+
+    # -- Public interface ----------------------------------------------
+
+    def wrap(self, text):
+        """wrap(text : string) -> [string]
+
+        Reformat the single paragraph in 'text' so it fits in lines of
+        no more than 'self.width' columns, and return a list of wrapped
+        lines.  Tabs in 'text' are expanded with string.expandtabs(),
+        and all other whitespace characters (including newline) are
+        converted to space.
+        """
+        text = self._munge_whitespace(text)
+        indent = self.initial_indent
+        if len(text) + len(indent) <= self.width:
+            return [indent + text]
+        chunks = self._split(text)
+        if self.fix_sentence_endings:
+            self._fix_sentence_endings(chunks)
+        return self._wrap_chunks(chunks)
+
+    def fill(self, text):
+        """fill(text : string) -> string
+
+        Reformat the single paragraph in 'text' to fit in lines of no
+        more than 'self.width' columns, and return a new string
+        containing the entire wrapped paragraph.
+        """
+        return "\n".join(self.wrap(text))
+
+
+# -- Convenience interface ---------------------------------------------
+
+def wrap(text, width=70, **kwargs):
+    """Wrap a single paragraph of text, returning a list of wrapped lines.
+
+    Reformat the single paragraph in 'text' so it fits in lines of no
+    more than 'width' columns, and return a list of wrapped lines.  By
+    default, tabs in 'text' are expanded with string.expandtabs(), and
+    all other whitespace characters (including newline) are converted to
+    space.  See TextWrapper class for available keyword args to customize
+    wrapping behaviour.
+    """
+    w = TextWrapper(width=width, **kwargs)
+    return w.wrap(text)
+
+def fill(text, width=70, **kwargs):
+    """Fill a single paragraph of text, returning a new string.
+
+    Reformat the single paragraph in 'text' to fit in lines of no more
+    than 'width' columns, and return a new string containing the entire
+    wrapped paragraph.  As with wrap(), tabs are expanded and other
+    whitespace characters converted to space.  See TextWrapper class for
+    available keyword args to customize wrapping behaviour.
+    """
+    w = TextWrapper(width=width, **kwargs)
+    return w.fill(text)
+
+
+# -- Loosely related functionality -------------------------------------
+
+def dedent(text):
+    """dedent(text : string) -> string
+
+    Remove any whitespace that can be uniformly removed from the left
+    of every line in `text`.
+
+    This can be used e.g. to make triple-quoted strings line up with
+    the left edge of screen/whatever, while still presenting it in the
+    source code in indented form.
+
+    For example:
+
+        def test():
+            # end first line with \ to avoid the empty line!
+            s = '''\
+            hello
+              world
+            '''
+            print repr(s)          # prints '    hello\n      world\n    '
+            print repr(dedent(s))  # prints 'hello\n  world\n'
+    """
+    lines = text.expandtabs().split('\n')
+    margin = None
+    for line in lines:
+        content = line.lstrip()
+        if not content:
+            continue
+        indent = len(line) - len(content)
+        if margin is None:
+            margin = indent
+        else:
+            margin = min(margin, indent)
+
+    if margin is not None and margin > 0:
+        for i in range(len(lines)):
+            lines[i] = lines[i][margin:]
+
+    return '\n'.join(lines)
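
A short usage sketch of the wrapping helpers in textwrap.py above (the interface
is the same as the Python 2.3 standard-library module of the same name); the
sample paragraph is made up.

    # Sketch only: wrap() returns a list of lines, fill() a single string,
    # dedent() strips a common left margin.
    import textwrap

    para = ("GNU Enterprise is a meta-project developing tools and "
            "packages for enterprise-class, data-aware applications.")

    for line in textwrap.wrap(para, width=30):
        print line                    # each line is at most 30 columns

    print textwrap.fill(para, width=40,
                        initial_indent="  ", subsequent_indent="    ")

    print textwrap.dedent('''\
        hello
          world
        ''')                          # prints 'hello\n  world\n'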

Modified: trunk/www/utils/helpers/files.py
===================================================================
--- trunk/www/utils/helpers/files.py    2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/files.py    2004-03-07 06:27:44 UTC (rev 5249)
@@ -1,9 +1,84 @@
-import os
+import sys, os, string
 
 SVN_BASE=os.path.abspath(os.path.join(os.path.dirname(__file__),'../../..'))
 
 def openModuleFile(module,file):
+  """
+  Open a file in the gnue-<module>/ directory.
+  Returns None if the file doesn't exist.
+
+  Note: First tries to read the <module>.<file> file
+  from the www/releases/release-files/ folder.
+  """
   try:
-    return open (os.path.join(SVN_BASE,'gnue-%s' % module.lower(), file))
+    return open (os.path.join(SVN_BASE,'www','releases',
+                 'release-files',module.lower()+'.' + file))
   except IOError:
-    return None
+    try:
+      return open (os.path.join(SVN_BASE,'gnue-%s' % module.lower(), file))
+    except IOError:
+      return None
+
+
+def importModule(module):
+  """
+  Import the src/ directory of one of the gnue tools.
+  Useful for getting data from __init__.py
+  """
+  oldpath = sys.path[:]
+  sys.path.insert(0,os.path.join(SVN_BASE,'gnue-%s' % module))
+  import src
+  sys.path = oldpath
+  sys.modules.pop('src')
+  return src
+
+
+class SubheadedFile:
+  """
+  Reads in a file that is formatted with
+  dash-underlined section headers, e.g.:
+
+  Foo
+  ---
+  Text here
+
+  Bar
+  ---
+  More text
+  """
+
+  def __init__(self, input):
+    buffer = []
+    self.sections = {None:buffer}
+    self.sectionOrder = []
+    lines = input.readlines() + ["","",""]  # So we don't get index errors
+    i = 0
+    while i < len(lines)-3:
+      line = lines[i].rstrip()
+      if (not line or i == 0) and lines[i+1].rstrip()[:1] and lines[i+2][:3] == '---':
+        buffer = []
+        section = lines[i+1].rstrip()
+        self.sectionOrder.append(section)
+        self.sections[section.upper()] = buffer
+        i += 3
+      else:
+        buffer.append(line)
+        i += 1
+
+
+  def asText(self, section):
+    return string.join(self.sections[section.upper()],'\n').strip()
+
+  def asHTML(self, section):
+    return '<p>' + self.asText(section).replace('\n\n','</p>\n<p>') + '</p>'
+
+  def firstParaAsHTML(self, section):
+    text = string.join(self.sections[section.upper()],'\n').strip().split('\n\n')[0]
+    return '<p>' + text + '</p>'
+
+
+def addToolLinks(html, links, base, exclude=""):
+  for name, tool in links.items():
+    if tool != exclude:
+      html = html.replace(name,'<a href="%s/%s/">%s</a>' % (base, tool, name))
+  return html
\ No newline at end of file
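
The new SubheadedFile helper drives most of what follows in tools.py, so here is a
small parsing sketch. Note that the parser's look-ahead expects a blank line (or
the start of input) immediately before each dash-underlined header; the sample
text and the sys.path assumption are illustrative only.

    # Sketch only: assumes www/utils/helpers/ is on sys.path so that
    # "from files import SubheadedFile" resolves to the module above.
    from StringIO import StringIO
    from files import SubheadedFile

    sample = StringIO("\nGENERAL\n-------\nWhat is GNUe?\n\nIt is free software.\n")

    doc = SubheadedFile(sample)
    print doc.sectionOrder                 # ['GENERAL']
    print doc.asText('general')            # section lookups are case-insensitive
    print doc.firstParaAsHTML('GENERAL')   # '<p>What is GNUe?</p>'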

Modified: trunk/www/utils/helpers/tools.py
===================================================================
--- trunk/www/utils/helpers/tools.py    2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/utils/helpers/tools.py    2004-03-07 06:27:44 UTC (rev 5249)
@@ -1,43 +1,150 @@
-from files import openModuleFile
+import sys, os, string, time
+from files import openModuleFile, importModule, SubheadedFile, SVN_BASE
+from StringIO import StringIO
 
-
 ##
 ##
 ##
 class Tool:
   def __init__(self, tool):
-    self.version = None # TODO
     self.faq = FAQ(tool)
+    self.releases = NEWS(tool)
+    self.readme = README(tool)
+    self.install = INSTALL(tool)
 
+    # Import the src/ module, if possible
+    try:
+      self.imported = importModule(tool)
+      try:
+        self.name = self.imported.TITLE
+        self.package = self.imported.PACKAGE
+      except AttributeError:
+        print "WARNING: gnue-%s/__init__.py does not define 'TITLE'" % (tool)
+        self.name = tool
+    except ImportError:
+      print "WARNING: cannot import gnue-%s/src" % (tool)
+      self.imported = None
+      self.name = tool
 
+    # Check for COPYING file
+    if not os.path.isfile(os.path.join(SVN_BASE,'gnue-'+tool,'COPYING')):
+      print "WARNING: gnue-%s has no COPYING file" % tool
 
 
+
+######################################################################
 ##
 ##
-##
-class FAQ:
+class NEWS:
   """
-  Reads in a project
+  Reads in a project's NEWS file to get release info
   """
 
   def __init__(self, tool):
-    file = openModuleFile(tool,'FAQ')
+    file = openModuleFile(tool,'NEWS')
+    self.tool = tool
     if not file:
-      self.text = ""
+      print "WARNING: gnue-%s has no NEWS file" % (tool)
+      self.lines = ()
     else:
-      self.text = file.read()
+      self.lines = file.readlines()
+      file.close()
 
-    file.close()
+    self.releases = {}
+    self.releaseOrder = []
+    buffer = []
+    for line in self.lines:
+      line = line.rstrip()
+      if line[:20] == 'New features/changes':
+        version, date = line[32:].split(':',1)
+        date = date.strip()[1:11]
+        buffer = []
+        # Process the release time
+        try:
+          date = time.strptime(date,'%Y-%m-%d')
+        except:
+          print "Ignoring %s release %s since it doesn't have a valid release 
date." % (self.tool, version)
+        else:
+          self.releases[version] = (date, buffer)
+          self.releaseOrder.append(version)
+      else:
+        buffer.append(line)
 
   ##
   #
-  def asHTML(self):
-  """
-  Return the FAQ formatted for HTML. Does not include html header or body tags
-  """
-    pass
+  def getCurrentRelease(self):
+    try:
+      return self.releaseOrder[0]
+    except IndexError:
+      return None
 
+
+  def getReleaseDate(self, release, format="%Y-%m-%d"):
+    return time.strftime(format, self.releases[release][0])
+
   ##
   #
-  def asText(self):
-    return self.text
+  def asHTML(self, release):
+    # TODO: This is a quick cheat :)
+    return "<pre>" + self.asText(release) + "</pre>"
+
+  ##
+  #
+  def asText(self, release):
+    return string.join(self.releases[release][1],'\n').strip()
+
+
+######################################################################
+##
+##
+class FAQ(SubheadedFile):
+  def __init__(self, tool):
+    file = openModuleFile(tool,'FAQ')
+    if not file:
+      print "WARNING: gnue-%s has no FAQ file" % (tool)
+      file = StringIO()
+
+    SubheadedFile.__init__(self, file)
+    file.close()
+
+    # Check for missing sections
+    for section in ('GENERAL',):
+      if not self.sections.has_key(section):
+        print "WARNING: gnue-%s/FAQ has no %s section" % (tool, section)
+
+
+######################################################################
+##
+##
+class README(SubheadedFile):
+  def __init__(self, tool):
+    file = openModuleFile(tool,'README')
+    if not file:
+      print "WARNING: gnue-%s has no README file" % (tool)
+      file = StringIO()
+
+    SubheadedFile.__init__(self, file)
+    file.close()
+
+    # Check for missing sections
+    for section in ('LICENSE','INTRODUCTION','INSTALLATION'):
+      if not self.sections.has_key(section):
+        print "WARNING: gnue-%s/README has no %s section" % (tool, section)
+
+######################################################################
+##
+##
+class INSTALL(SubheadedFile):
+  def __init__(self, tool):
+    file = openModuleFile(tool,'INSTALL')
+    if not file:
+      print "WARNING: gnue-%s has no INSTALL file" % (tool)
+      file = StringIO()
+
+    SubheadedFile.__init__(self, file)
+    file.close()
+
+    # Check for missing sections
+    for section in ['REQUIREMENTS']:
+      if not self.sections.has_key(section):
+        print "WARNING: gnue-%s/INSTALL has no %s section" % (tool, section)

Added: trunk/www/web/developers/section.css
===================================================================
--- trunk/www/web/developers/section.css        2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/web/developers/section.css        2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,3 @@
+#container {
+  background: url(../images/b_main.png) no-repeat top left;
+}

Added: trunk/www/web/images/b_dev.png
===================================================================
(Binary files differ)


Property changes on: trunk/www/web/images/b_dev.png
___________________________________________________________________
Name: svn:mime-type
   + application/octet-stream

Added: trunk/www/web/images/b_main.png
===================================================================
(Binary files differ)


Property changes on: trunk/www/web/images/b_main.png
___________________________________________________________________
Name: svn:mime-type
   + application/octet-stream

Added: trunk/www/web/images/b_packages.png
===================================================================
(Binary files differ)


Property changes on: trunk/www/web/images/b_packages.png
___________________________________________________________________
Name: svn:mime-type
   + application/octet-stream

Added: trunk/www/web/images/b_tools.png
===================================================================
(Binary files differ)


Property changes on: trunk/www/web/images/b_tools.png
___________________________________________________________________
Name: svn:mime-type
   + application/octet-stream

Added: trunk/www/web/images/b_wiki.png
===================================================================
(Binary files differ)


Property changes on: trunk/www/web/images/b_wiki.png
___________________________________________________________________
Name: svn:mime-type
   + application/octet-stream

Added: trunk/www/web/packages/_module_menu.php
===================================================================
--- trunk/www/web/packages/_module_menu.php     2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/web/packages/_module_menu.php     2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,11 @@
+     <h3><a href="<?php print "$BASEDIR"; ?>/tools/">ERP Packages</a></h3>
+      <ul>
+        <li><a href="<?php print "$BASEDIR"; ?>/packages/financials.html" 
>Financials</a></li>
+        <li><a href="<?php print "$BASEDIR"; ?>/packages/crm.html" >Customer 
Relations</a></li>
+        <li><a href="<?php print "$BASEDIR"; ?>/packages/hr.html" >Human 
Resources</a></li>
+        <li><a href="<?php print "$BASEDIR"; ?>/packages/manuf.html" 
>Manufacturing</a></li>
+        <li><a href="<?php print "$BASEDIR"; ?>/packages/sales.html" 
>Sales</a></li>
+        <li><a href="<?php print "$BASEDIR"; ?>/packages/scm.html" >Supply 
Chain</a></li>
+        <li><a href="<?php print "$BASEDIR"; ?>/packages/dcl.html" 
>DCL</a></li>
+        <li><a href="<?php print "$BASEDIR"; ?>/packages/packages.html" 
>More<?php print "$BASEDIR"; ?>.</a></li>
+      </ul>

Added: trunk/www/web/packages/section.css
===================================================================
--- trunk/www/web/packages/section.css  2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/web/packages/section.css  2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,3 @@
+#container {
+  background: url(../images/b_packages.png) no-repeat top left;
+}

Added: trunk/www/web/project/index.php
===================================================================
--- trunk/www/web/project/index.php     2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/web/project/index.php     2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,128 @@
+<?php $BASEDIR="..";
+      $MODULE="project";
+      include "$BASEDIR/shared/_header.php"; ?>
+
+<div id="body">
+
+
+      <!-- begin content -->
+    <h1>GNUe: An Overview</h1>
+
+      <p>GNU Enterprise (GNUe) is a meta-project which is part of the
+      overall <a href="http://www.gnu.org/";>GNU</a> Project. GNUe's goal
+      is to develop enterprise-class data-aware applications as Free
+      software.
+      GNUe is itself comprised of several subprojects:</p>
+
+    <h2>The Tools</h2>
+      <p>
+      Firstly, GNUe is a set of <a href="../tools/">tools</a>, such as a
+      data-aware user forms interface, a reporting system and an
+      application server, which provide a development framework for
+      enterprise information technology professionals to write or
+      customise data-aware applications and deploy them effectively
+      across large or small organisations. The GNUe platform boasts an
+      <a href="glossary.html#open_architecture">open architecture</a> and
+      easy maintenance. It gives users a modular system
+      and freedom from being stuck with a single-source vendor.
+      GNUe supports multi-language interfaces and non-ASCII character sets.
+      </p>
+
+    <h2>The Packages</h2>
+      <p>
+      GNUe is also a set of <a href="../packages/">packages</a> written
+      using the tools, to implement a full Enterprise Resource Planning
+      (<a href="glossary.html#erp">ERP</a>) system. From human resources,
+      accounting, customer relationship management and project management
+      to supply chain or e-commerce, GNUe can handle the needs of any
+      business, large or small. GNUe supports multi-currency processing
+      (including euro support).
+      </p>
+      <p>Note: Packages are not as far along in the development cycle as the <a href="../tools/">tools</a>. Most are still in the planning stages.</p>
+
+    <h2>The Community</h2>
+      <p>
+      A general <a href="../community/">community</a> of
+      support and resources for developers writing applications using the
+      GNUe Tools (whether part of the 'official' GNUe Packages or not).
+      It is designed to collect Enterprise software for the GNU system in
+      a single location (much like the <a href="http://www.gnome.org";>GNOME</a>
+      project collects Desktop software).
+      GNUe is a <a href="glossary.html#free_software">Free Software</a> project
+      (released under the <a href="http://www.gnu.org/licenses/gpl-faq.html">GNU
+      General Public License</a>) with
+      a corps of volunteer developers around the world working on GNUe
+      projects. This provides the added benefits of easy
+      internationalization of applications.
+      The project is working to provide a worldwide GNUe community,
+      allowing everyone who is involved in the project access to other
+      talented business information technology professionals.
+      </p>
+
+
+</div>
+
+  <!--
+  <div id="newsItems">
+    <h1 class="newsItem">GNUe News</h1>
+
+    <div class="newsItem">
+        <h2 class="newsItem">New Releases of GNUe Tools (0.5.2)</h2>
+
+        <span class="newsDate">22 October 2003</span>
+
+        <p class="newsItem">
+        The GNU Enterprise team is proud to announce a new release of
+        its enterprise application development suite.  This release
+        includes:
+        <ul>
+            <li> GNUe Forms 0.5.2
+            <li> GNUe Reports 0.1.3
+            <li> GNUe Designer 0.5.2
+            <li> GNUe Navigator 0.0.6
+            <li> GNUe AppServer 0.0.5
+            <li> GNUe Common 0.5.2
+        </ul>
+
+        <a href="../news/news136.html">More ...</a></p>
+        </p>
+    </div>
+
+    <div class="newsItem">
+        <h2 class="newsItem">New Releases of GNUe Tools (0.5.2)</h2>
+
+        <span class="newsDate">22 October 2003</span>
+
+        <p class="newsItem">
+        The GNU Enterprise team is proud to announce a new release of
+        its enterprise application development suite.  This release
+        includes:
+        <ul>
+            <li> GNUe Forms 0.5.2
+            <li> GNUe Reports 0.1.3
+            <li> GNUe Designer 0.5.2
+            <li> GNUe Navigator 0.0.6
+            <li> GNUe AppServer 0.0.5
+            <li> GNUe Common 0.5.2
+        </ul>
+
+        <a href="../news/news136.html">More ...</a></p>
+        </p>
+    </div>
+
+  </div>
+  -->
+
+  <!-- <div id="copyrightFooter">
+    <p>
+     News items are the property of their posters.
+     All the rest &copy; 1999, 2001 - 2004 by Free Software Foundation, Inc.,
+     59 Temple Place - Suite 330, Boston, MA 02111, USA <br>
+
+     Verbatim copying and distribution of this entire article is permitted in any
+     medium, provided this notice is preserved.
+    </p>
+  </div>  -->
+
+
+<?php include "$BASEDIR/shared/_footer.php"; ?>

Added: trunk/www/web/project/section.css
===================================================================
--- trunk/www/web/project/section.css   2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/web/project/section.css   2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,3 @@
+#container {
+  background: url(../images/b_main.png) no-repeat top left;
+}

Added: trunk/www/web/shared/_footer.php
===================================================================
--- trunk/www/web/shared/_footer.php    2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/web/shared/_footer.php    2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,5 @@
+<?php include "$BASEDIR/shared/_menu.php" ?>
+<br/>
+</div>
+</body>
+</html>

Added: trunk/www/web/shared/_header.php
===================================================================
--- trunk/www/web/shared/_header.php    2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/web/shared/_header.php    2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,28 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
+  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd";>
+<html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" >
+<head>
+        <meta http-equiv="content-type" content="text/html; 
charset=iso-8859-1" />
+        <meta name="author" content="GNU Enterprise" />
+        <meta NAME="COPYRIGHT" CONTENT="Copyright (c) 2001-2004 Free Software 
Foundation">
+        <meta NAME="KEYWORDS" CONTENT="Free Software, FreeSoftware, 
Freesoftware, free software, GNU, gnu, GPL, gpl, Unix, UNIX, *nix, unix, MySQL, 
mysql, SQL, sql, Database, DataBase, database, gnue, enterprise software, 
corba, supply chain, accounting, erp, crm,E-Commerce, GNU 
Enterprise,Application Service Providers, Business 2 Business, Customer 
Relationship Managment, Enterprise Application Integration, eai, E-Commerce, 
Middleware, postgresql, AP, AR, GL, xml, CORBA ">
+<META NAME="DESCRIPTION" CONTENT="Free software for your business">
+        <meta name="robots" content="all" />
+
+        <title>GNU Enterprise</title>
+
+        <!-- to correct the unsightly Flash of Unstyled Content. http://www.bluerobot.com/web/css/fouc.asp -->
+        <script type="text/javascript"></script>
+
+        <style type="text/css" media="all">
+                @import "<?php print "$BASEDIR/shared/base.css"; ?>";
+                @import "<?php print "$BASEDIR/$MODULE/section.css"; ?>";
+        </style>
+
+</head>
+
+<!-- header table -->
+
+<body onload="window.defaultStatus='Welcome to GNU Enterprise';" id="css-gnue">
+
+<div id="container">

Added: trunk/www/web/shared/_menu.php
===================================================================
--- trunk/www/web/shared/_menu.php      2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/web/shared/_menu.php      2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,28 @@
+  <div id="leftMenuList">
+    <h3><a href="<?php print "$BASEDIR"; ?>/project/" >GNUe Home</a></h3>
+
+      <ul>
+        <li><a href="<?php print "$BASEDIR"; ?>/project/news/" >News</a><br>
+        <li><a href="http://www.gnuenterprise.org/downloads/downloads.php"; 
>Downloads</a><br>
+        <li><a href="<?php print "$BASEDIR"; ?>/docs/" >Documentation</a><br>
+        <li><a href="<?php print "$BASEDIR"; ?>/project/screenshots.html" 
>Screenshots</a><br>
+        <li><a href="<?php print "$BASEDIR"; ?>/project/bugs.html" >Bug 
Tracking</a></li>
+        <li><a href="<?php print "$BASEDIR"; ?>/project/media.html" >Media 
Resources</a><br>
+        <li><a href="<?php print "$BASEDIR"; ?>/developers/" >Developer's 
Corner</a></li>
+        <li><a href="<?php print "$BASEDIR"; ?>/project/involve.html" >Get 
Involved!</a></li>
+      </ul>
+
+    <?php @include("$BASEDIR/$MODULE/_module_menu.php")?>
+
+
+    <h3><a href="<?php print "$BASEDIR"; ?>/project/" >GNUe Projects</a></h3>
+      <ul>
+        <li><a href="<?php print "$BASEDIR"; ?>/tools/" >Developer 
Tools</a></li>
+        <li><a href="<?php print "$BASEDIR"; ?>/packages/" >ERP 
Packages</a></li>
+        <li><a href="<?php print "$BASEDIR"; ?>/contrib/" >Other 
Projects</a></li>
+      </ul>
+
+    <?php @include("_extra_menu.php") ?>
+
+  </div>
+

Added: trunk/www/web/shared/base.css
===================================================================
--- trunk/www/web/shared/base.css       2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/web/shared/base.css       2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,146 @@
+
+.productGraphic {
+  float: right;
+  margin: 0 0 10px 10px;
+  border: 1px solid #666;
+}
+
+.latestRelease {
+  font-size: 9px;
+  font-style: italic;
+}
+
+body {
+  background: #fff url(../images/bg.png) no-repeat fixed top left;
+  font-family: verdana, helvetica, arial, sans-serif;
+  margin: 0 20px 0 20px;
+  color: #333366;
+}
+
+#container {
+  position: relative;
+  margin: 0 auto -1px auto;
+  padding-top: 144px;
+  width: 710px;
+/*  border-bottom: 5px solid #09345f; */
+}
+
+#leftMenuList {
+  position: absolute;
+  width: 150px;
+  top: 130px;
+  left: 5px;
+  font-size: 9px;
+  color: #333366;
+}
+
+#leftMenuList h3 {
+  width: 100%;
+  font-size: 12px;
+  border-width: thin;
+  border-color: #0099ff;
+  border-style: dashed none dashed none;
+  margin: 12px 0 0 0px;
+}
+
+#leftMenuList a {
+  color: #333366;
+  font-size: 8px;
+  text-decoration: none;
+}
+
+#leftMenuList h3 a {
+  c2olor: #99CCFF;
+  color: #0099ff;
+}
+
+#leftMenuList h4 {
+  width: 100%;
+  font-size: 12px;
+  font-weight: bold;
+  margin: 6px 0 0 0px;
+}
+
+#leftMenuList ul {
+  font-size: 9px;
+  list-style:none;
+  margin: 5px 0 0 0;
+  padding: 0 0 0 0;
+}
+
+#leftMenuList a:hover {
+  text-decoration: underline;
+}
+
+#newsItems {
+  position: absolute;
+  width: 150px;
+  top: 130px;
+  left: 560px;
+  font-size: 9px;
+}
+
+#body {
+  margin: 0px 0 0 176px;
+  width: 529px;
+  color: #000000;
+}
+
+#body h1,h2 {
+  font-weight: bold;
+  color: #333366;
+/*  background-color: #fcfcfc; */
+  border-width: thin;
+  border-style: solid none dashed none;
+  margin: 0 0 0 0;
+  padding: 0 0 0 0 ;
+}
+
+#body h1 {
+  width: 100%;
+  font-size: 18px;
+  background-color: #f9f9f9;
+}
+
+#body h2 {
+  font-size: 14px;
+  margin: 12px 0 0 0;
+  border-style: none none dotted none;
+}
+
+#body h2 a {
+  font-weight: bold;
+  color: #333366;
+  text-decoration: none;
+}
+
+#body h2 a:hover {
+  font-weight: bold;
+  color: #333366;
+  text-decoration: underline;
+}
+
+ul,li,ol {
+  font-size: 11px;
+}
+
+p {
+  font-size: 12px;
+  margin-top: 4px;
+  margin-bottom: 12px;
+}
+
+p.warn {
+  font-size: 11px;
+  font-style: italic;
+  background-color: #ffffcc;
+  margin-left: 12px;
+  margin-right: 12px;
+}
+
+pre {
+  font-size: 9px;
+  margin-top: 4px;
+  margin-bottom: 12px;
+  margin-left: 12px;
+}
\ No newline at end of file

Added: trunk/www/web/tools/forms/product.png
===================================================================
(Binary files differ)


Property changes on: trunk/www/web/tools/forms/product.png
___________________________________________________________________
Name: svn:mime-type
   + application/octet-stream

Added: trunk/www/web/tools/section.css
===================================================================
--- trunk/www/web/tools/section.css     2004-03-07 00:58:55 UTC (rev 5248)
+++ trunk/www/web/tools/section.css     2004-03-07 06:27:44 UTC (rev 5249)
@@ -0,0 +1,3 @@
+#container {
+  background: url(../images/b_tools.png) no-repeat top left;
+}




