Lout Question

From: Terrence M. Brannon
Subject: Lout Question
Date: Mon, 16 Oct 1995 11:34:32 -0700

Hi Jeff, I don't think the mailing list is working. I will CC this to
the list as proof of concept. In the meantime, I get the following error:

hyla.usc.edu:/export/home/hyla-01/brannon/rs lout nsl_vs_pdp.lt > x
lout file "nsl_vs_pdp":
    16,1: no @SectionList precedes this @SectionList&&preceding

With the following file:

@SysInclude { tab }
@SysInclude { creport }
        @Title { Evaluation of the PDP++ Neural Network Simulator }
        @Author { Terrence Monroe Brannon (brannon"@"rana.usc.edu) }
        @Institution { University of Southern California
Department of Neuroscience
Los Angeles, CA 90089-2520
(213) 740-6995 }
        @DateLine { Yes }
        @CoverSheet { No }

@Section @Title { Initial Impressions }

It has taken me fully 3-4 hours to run a simple backprop program in
PDP++. The instructions are clear but the environment is huge and
menu-driven as opposed to small, tight and compiler driven.


@SubSection @Title { Size of gzipped tar file }

@Verbatim {
-rw-r--r--  1 brannon  12476101 Oct  2 15:27 pdp++_1.01_bin_SUN4.tar.gz
-rw-------  1 brannon   9697280 Oct  2 20:09 pdp++_1.01_ext.tar
}

There doesn't appear to be much support for small problems. I sent
e-mail to the pdpadmin because I had a warning message when opening up
the backpropagation example, but I never got a reply.

@End @SubSection

@End @Section

@Section @Title { Sat Oct  7 17:04:31 PDT 1995 }

I followed the instructions on how to create the 4-2-4 encoder
example. For some reason, none of my graphics worked. I didn't know
what to do next:

@List
@ListItem @I { I couldn't call a help support line. }
@ListItem @I { I couldn't e-mail these people. In all fairness, Randall
O'Reilly did answer my personal e-mail about what documentation system
they were using. }
@ListItem @I { I didn't see anything about it in the FAQ. }
@EndList

Of course, their example of the same thing ran without a hitch.

My gut feeling after this episode:

@List
@ListItem @I {
This sort of thing would never have happened in NSL. If
I have a problem with NSL, the architecture is simple
enough that I can go in and examine the code directly. }
@ListItem @I {
The connection between graphics and
network layer elements (units, vectors, arrays) is much
more complex in PDP++. }
@ListItem @I {
However, the extensibility of the PDP++ framework allows for
automatic customizability --- whenever new C++ code is
written using PDP++ conventions, a user interface is
automatically generated for it. }
@EndList

From the manual:


@IndentedDisplay 10p @Font {
Below the Action region, in the lower left corner of the NetView is
the Member Region. This region contains a vertical scrollbox of
buttons which are labeled with the names of the member fields of the
units and their connections in the NetView. }

@IndentedDisplay 10p @Font {
Most of the graphical interface (i.e., edit dialogs, menus, etc.) is
generated automatically from the information provided by
TypeAccess. The same is true for the way you can transparently access
hard-coded types and objects through CSS. Thus, you don't need to do
anything special to be able to use your newly defined classes exactly
in the way that you use the ones that come with the software. }

Note that this says @I { most }. They don't really elaborate as to
what is and what isn't handled, here or elsewhere in the manual. I
have the feeling that you are supposed to be content with the models
they developed. And those models are excellent for those paradigms,
but the second you want to become more detailed in your own special
way, you have to deal with programming in the PDP++ API, which is not
too appealing a prospect, mainly because of the special and complex
nature of the Makefile which is required to automatically script your
new C++ code.

@End @Section

@Section @Title { Remarkable Features, Good, Bad, Weird, and Other }


@SubSection @Title { Networks"/"Libraries Included }


@List
@ListItem { Backpropagation (feedforward and recurrent) }
@ListItem { Constraint Satisfaction }
@ListItem {
Self-organizing learning. The PDP++ implementation of
self-organizing learning, called So, includes competitive learning and
several variations of it, including "soft" competitive learning
(Nowlan, 1990), which replaces the "hard" competition of standard
competitive learning with a more graded activation function. Also
included are a couple of different types of modified Hebbian learning
rules that can be used with either hard or soft activation functions. }
@EndList
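To make the "soft" competition concrete, here is a minimal C++ sketch,
assuming a softmax over negative squared distances --- my reading of the
Nowlan-style scheme, not the actual So implementation:

```cpp
// Sketch of "soft" competitive learning: instead of the winner taking
// all, every unit's weight vector moves toward the input in proportion
// to a graded "responsibility" (softmax of negative squared distance).
// Assumed formulation, not PDP++'s real So code.
#include <cmath>
#include <vector>

using Vec = std::vector<double>;

static double sqDist(const Vec& a, const Vec& b) {
    double d = 0.0;
    for (size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// Graded activations: softmax over negative squared distances to input x.
std::vector<double> softCompete(const std::vector<Vec>& w, const Vec& x) {
    std::vector<double> p(w.size());
    double z = 0.0;
    for (size_t j = 0; j < w.size(); ++j) {
        p[j] = std::exp(-sqDist(w[j], x));
        z += p[j];
    }
    for (double& pj : p) pj /= z;
    return p;
}

// One learning step: each unit moves toward x, weighted by its share.
void update(std::vector<Vec>& w, const Vec& x, double lrate) {
    std::vector<double> p = softCompete(w, x);
    for (size_t j = 0; j < w.size(); ++j)
        for (size_t i = 0; i < x.size(); ++i)
            w[j][i] += lrate * p[j] * (x[i] - w[j][i]);
}
```

With hard competition only the single winning unit would move; here every
unit moves a little, in proportion to its responsibility.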


@End @SubSection

@SubSection @Title { Object Hierarchy }

PDP++ seems to think that the only use of mathematics is statistics:
in other words, it automates training processes (objects) to propagate
their values to statistics, but doesn't really make it easy to, say,
sum a layer of activity and use it in a differential equation. This is
why I gave up on trying to implement Didday.
@IncludeGraphic pdp.hier.eps
Projections specify the broad patterns of connectivity between
layers. Connections are the actual unit-to-unit weights and other
parameters which actually implement this connectivity. Thus, there is
always a projection associated with a set of connections.  
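The projection/connection split described above can be sketched as a
data structure; the names below are illustrative assumptions, not the
real PDP++ classes:

```cpp
// Hypothetical sketch of the layer/projection/connection split: a
// Projection records the broad pattern (which layer feeds which) and
// owns the Connections that actually implement it, so every Connection
// belongs to exactly one Projection. Names are illustrative only.
#include <cstddef>
#include <vector>

struct Connection {          // one unit-to-unit link
    std::size_t from, to;    // unit indices in sending/receiving layer
    double weight;
};

struct Layer {
    std::size_t nUnits;
};

struct Projection {
    const Layer* send;
    const Layer* recv;
    std::vector<Connection> cons;

    // One possible connectivity pattern: full connection at weight w0.
    void fullConnect(double w0) {
        cons.clear();
        for (std::size_t i = 0; i < send->nUnits; ++i)
            for (std::size_t j = 0; j < recv->nUnits; ++j)
                cons.push_back({i, j, w0});
    }
};
```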

@End @SubSection

@SubSection @Title { Misc }

@List
@ListItem { PDP++ can stop a simulation once a criterion is reached. }
@ListItem { PDP++ can save a network after training and then load it in
for later use. }
@ListItem { PDP++ can remove higher processes without deleting the lower
ones by setting {Courier Base} @Font {sub_proc_type} to NULL before
deleting. This is a potentially nice feature. }
@ListItem { PDP++ has an interesting twist, though I think it should be
more general: it is possible to determine which event is closest to the
output my network actually produced. }
@ListItem { PDP++ has a contorted, menu-driven way of interleaving
training with testing for cross-validation:
@Verbatim {
How do I create a cross-validation setup?
    Cross-validation is accomplished by periodically testing during
    training. Thus, you simply need to create a testing process hierarchy,
    (at the Epoch level), and link your testing epoch process into the
    loop_procs of the training process. You should set the modulo factor
    (in the mod field of the process) of your testing epoch process to
    reflect the frequency with which you want to perform testing.
} }
@EndList
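Stripped of the menus, the modulo scheme in that FAQ answer is just the
following loop; `testEpochs` is a hypothetical stand-in that records when
the linked testing process would fire, not a PDP++ function:

```cpp
// Sketch of modulo-interleaved cross-validation: train every epoch, and
// run the linked testing epoch process whenever epoch % mod == 0.
// Hypothetical stand-in for the process-hierarchy machinery.
#include <vector>

std::vector<int> testEpochs(int nEpochs, int mod) {
    std::vector<int> out;
    for (int epoch = 1; epoch <= nEpochs; ++epoch) {
        // trainOneEpoch(net, env);  // placeholder for the training step
        if (epoch % mod == 0)
            out.push_back(epoch);    // linked testing process fires here
    }
    return out;
}
```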

@End @SubSection

@SubSection @Title { Truly compositional neurosimulation }
Due to the fine-grained separation of functionality into objects, the
following is possible in PDP++:
@IndentedDisplay 10p @Font {
The SyncEpochProc runs two different sub-processes through the same
set of events from a common environment. Thus, it can be used to train
two different networks, even networks that use different algorithms,
at the same time. It essentially just adds a set of pointers to a
second_network and a second_proc_type and second_proc, which identify
the second branch of the processes to run, and which network to run
them on. }
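The SyncEpochProc behavior described above amounts to the following
loop, sketched with hypothetical names rather than the real PDP++ types:

```cpp
// Sketch of a SyncEpochProc-style epoch: one event loop presents the
// same events from a common environment to two networks, which may be
// trained by entirely different algorithms. Illustrative names only.
#include <functional>
#include <vector>

struct Event { std::vector<double> pattern; };

using TrainStep = std::function<void(const Event&)>;

// Run both sub-processes through the same set of events, mirroring the
// second_network / second_proc idea from the manual.
void syncEpoch(const std::vector<Event>& env,
               TrainStep first, TrainStep second) {
    for (const Event& ev : env) {
        first(ev);   // e.g. a backprop step on network 1
        second(ev);  // e.g. a Hebbian step on network 2
    }
}
```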
To do this in NSL is not very difficult: one would just copy the code
from one simulation into another. This is the only way to get it done,
because NSL is designed for the entire simulation to be in one
file. If the NSL_MODULE command were not a macro but instead a
function, then it would be equivalently easy in NSL to compose what
(based on what the manual says) is straightforward in PDP++.
The file {Courier Base} @Font {demo"/"bridge"/"README} brings up some
interesting issues in linking together two networks.

@End @SubSection

@SubSection @Title { PDP++'s Spec Objects }
I quote:
@IndentedDisplay 10p @Font {
One of the important design considerations for the PDP++ software was
the idea that one should separate state variables from specifications
and parameters (see Section Separation of State from Specification
Variables). The attributes of an object can often be divided into two
types --- the first type of attributes represent the object's
state. These change over time and are usually distinct within each
instance of an object. The second group of attributes represent
parameters of an object that tend to remain fixed over time, and often
have values that are common among instances of the object class. This
second group can be thought of as a specification for the object
class.  For example: The car object class might have two attributes:
color, and current-speed. The color attribute would be set when the
car was built, and would (hopefully) not be changing very much. It
would be classified as a specification parameter. The current-speed
attribute is likely to be constantly changing as the car accelerates
and decelerates. It is representative of the car's current state and
would be classified in the first group, the car's state space. If you
took a group of cars, chances are that some of them would share the
same color, but they would probably be moving around at different
speeds. Rather than have each car carry around an attribute for its
color, the specification attributes are split off from the car and put
into a special class called a car-specification or carspec. In this
way cars with identical colors can share the same specification, while
still having their own state attributes like current-speed. By
changing the color attribute in the specification, all the cars
sharing that specification would have their color changed. This allows
easy access to common parameters of an object class in one
location. Rather than individually setting the color parameter for
each instance of a car, the attribute can be set just once in the
specification. }

I personally can't see why this couldn't be done with a class variable
@FootNote { whose value is shared by all instances of the class }
and instance variables
@FootNote { whose values are particular to each instance of the class }
--- or, in C++ terminology, static class variables
and non-static class variables.
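The class-variable alternative, sketched in C++ (with the caveat that a
static member gives one shared value for the whole class, whereas specs
let different groups of instances share different values):

```cpp
// The car example from the manual, done with class variables: the
// shared "spec" attribute (color) is a static member, the per-instance
// state (current speed) is a non-static member. Changing Car::color
// changes it for every car at once.
#include <string>

struct Car {
    static std::string color;   // specification: shared by all instances
    double currentSpeed = 0.0;  // state: distinct per instance
};

std::string Car::color = "red";
```

To recover the grouping behavior of carspecs (several colors, each shared
by a subset of cars), each Car would instead hold a pointer to a shared
spec object.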

@End @SubSection


@End @Section

@Section @Title { Concluding Impressions }
PDP++ has super graphics. Incredible, even. The biggest problem (and
this is a very serious problem) with the setup is that I don't have the
vaguest idea whether I have all the tons of parameters specified
correctly, and further, whether those parameters are being sent to the
right functions in the right order. In short, the control flow of the
neurosimulator is not manifest to the user. In contrast, NSL's control
is highly manifest --- the behavior of INIT_MODULES, MODULES, and
RUN_MODULES is clearly stated in the manual. Any interaction between
the functions obeys the C++ calling conventions.

This post from the PDP++ mailing list adequately summarizes the state
of the user of PDP++:
@FootNote { Outside of Carnegie Mellon :-). }
@IndentedDisplay 10p @Font {
From: "Marshall R. Mayberry" <address@hidden>
To: address@hidden
Subject: Constrain Satisfaction using PDP++
Date: Thu, 12 Oct 1995 03:39:45 -0500

Try as I might, I cannot seem to configure the software to solve a
simple cs problem using the Hopfield or Boltzmann paradigms.  There are
so many bells and whistles that, even with the aid of the manual, it
is difficult to know which parameters are relevant and which are not.
Even a basic example which does not involve learning would be useful.

I'll include here a statement of the problem, which I want to set up
as a class assignment (I've already managed to set up PDP++ to do
digit recognition using backprop), and a listing of the files which were 
used to solve the problem using the old PDP software.  Any help on translating
the old files into steps to take to set PDP++ up to do the same thing
would be greatly appreciated.

Problem Statement:
Recurrent networks of the Hopfield and Boltzmann type can be used to
perform constraint satisfaction.  The units stand for hypotheses,
positive links between them indicate support and negative links
conflict. A unit with activity 1 indicates that the hypothesis is active,
0 = inactive.  Starting from a random initial configuration of
hypotheses, the network settles into a configuration which satisfies the
constraints.

The map coloring problem: There are six countries on the map (A,
B, C, D, E and F, see figure below). You have to color the map using
three colors (Red, Green, and Blue) so that neighboring countries have
different colors.

@Verbatim {
 \     C     /
  \         /
F  | A | B |  D
  /         \
 /     E     \
}


Part I: Design a constraint satisfaction network for the map
coloring problem. Each node stands for a particular color for a
particular country, e.g. A_is_Red. A weight linking two hypotheses (i.e.
two color assignments) should be negative if the color assignments are
incompatible, and non-negative if they are compatible.

Part II: Perform a statistical analysis on how reliably the
network solves the coloring problem. Try about 20 different random
initial configurations, and collect statistics on the results, i.e. how
often the network finds a valid solution, and the different ways it
fails. You may want to go back and modify the weight assignments and
thresholds to improve performance.

Part III: Repeat the statistical analysis using the same network
and the same initial states as in part II, but this time make use of
simulated annealing.  First you will have to experiment with annealing
schedules to find one that seems to work reasonably well.

Tools:  Use the cs program in the PDP software package.
Make the istr parameter very large so that the sigmoid function
approximates threshold function, i.e. your network executes like
the Hopfield network. For part III of the assignment, run cs in the
boltzmann mode.

Documentation: Write a one or two page report on the experiments
you did. List the weight assignment that you used and explain how you
came up with it. List the results of the statistical analysis in parts
II and III. Did the simulated annealing version do better than the
Hopfield network version? Why or why not? What could you do to improve
the approach?


Old PDP files:
set state activation 0 1.0 
set state activation 1 0.0 
set state activation 2 0.0 
set state activation 3 0.0 
set state activation 4 1.0 
set state activation 5 0.0 
set state activation 6 1.0 
set state activation 7 0.0 
set state activation 8 0.0 
set state activation 9 1.0 
set state activation 10 0.0 
set state activation 11 1.0 
set state activation 12 1.0 
set state activation 13 0.0 
set state activation 14 1.0 
set state activation 15 1.0 
set state activation 16 1.0 
set state activation 17 0.0 

6 3
 0  1  2 
 3  4  5
 6  7  8
 9 10 11
12 13 14
15 16 17

nunits 18
ninputs 18
nupdates 18

c -1.0
n -1.0
s  0.0
z  0.1
b  0.0



set dlevel 3
get network map.net
set param estr .4
set param istr 10.0
set ncyc 20
get anneal 1.0 30 0.05 50 0.01 end
get unames Ar Ag Ab 
           Br Bg Bb 
           Cr Cg Cb 
           Dr Dg Db 
           Er Eg Eb 
           Fr Fg Fb 

define: layout

           Red         Green          Blue     
        in inp act   in inp act   in inp act  
     A $   $   $ 

               goodness  $        temperature $

cycleno    label        0       6       70      h          5
cycleno    variable     1       6       75      cycleno    4    1
extinput   look         2       $       0       extinput   3 100. 13 map.loo
intinput   look         2       $       1       intinput   3 100. 13 map.loo
activation look         1       $       2       activation 3 100. 13 map.loo
goodnes    floatvar     1       $       3       goodness   7 1.0
temperature floatvar    1       $       4       temperature 7 1.0


What is the basic way to set the initial weights so that they mimic the
constraints in map.net?  Also, how do I set the initial state activations?
And, lastly, I'm only interested in the goodness statistic at this point
(no need to confuse everyone with the variety of other statistics that
could be gathered).

Thanks for any input,

Marshall R. Mayberry
}
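For what it is worth, the quoted assignment is easy to state as a
from-scratch Hopfield-style sketch. This is not the old cs program: the
adjacency list is my reading of the ASCII map, and the weights and bias
are arbitrary choices.

```cpp
// Hopfield-style constraint satisfaction for the map-coloring problem:
// unit (c,k) = "country c has color k"; strong negative weights between
// two colors of the same country and between the same color of
// neighboring countries; asynchronous threshold updates with a small
// positive bias so some color in each country turns on.
#include <array>
#include <utility>
#include <vector>

const int NC = 6, NK = 3;   // countries A..F, colors R,G,B
// Adjacency as I read the ASCII map (an assumption):
const std::vector<std::pair<int,int>> kNeighbors = {
    {0,1},{0,2},{0,4},{0,5},{1,2},{1,3},{1,4},{2,3},{2,5},{3,4},{4,5}};

inline int unit(int c, int k) { return c * NK + k; }

struct Net {
    std::array<std::array<double, NC*NK>, NC*NK> w{};
    std::array<double, NC*NK> a{};   // unit activities, 0 or 1

    Net() {
        for (int c = 0; c < NC; ++c)          // one color per country
            for (int k = 0; k < NK; ++k)
                for (int k2 = 0; k2 < NK; ++k2)
                    if (k != k2) w[unit(c,k)][unit(c,k2)] = -2.0;
        for (auto [c1, c2] : kNeighbors)      // neighbors must differ
            for (int k = 0; k < NK; ++k) {
                w[unit(c1,k)][unit(c2,k)] = -2.0;
                w[unit(c2,k)][unit(c1,k)] = -2.0;
            }
    }

    // One sweep of asynchronous threshold updates; the bias favors "on".
    void sweep() {
        for (int i = 0; i < NC*NK; ++i) {
            double net = 1.0;                 // positive bias
            for (int j = 0; j < NC*NK; ++j) net += w[i][j] * a[j];
            a[i] = net > 0.0 ? 1.0 : 0.0;
        }
    }

    // Exactly one color per country, and no neighbor shares a color.
    bool valid() const {
        for (int c = 0; c < NC; ++c) {
            int on = 0;
            for (int k = 0; k < NK; ++k) on += a[unit(c,k)] > 0.5;
            if (on != 1) return false;
        }
        for (auto [c1, c2] : kNeighbors)
            for (int k = 0; k < NK; ++k)
                if (a[unit(c1,k)] > 0.5 && a[unit(c2,k)] > 0.5) return false;
        return true;
    }
};
```

The assignment's parts II and III would randomize the initial activities
and add an annealing schedule; this sketch only shows the weight setup
and the deterministic update rule.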

In terms of extensibility, PDP++ promises automatic user-interface
generation if your new objects (specialized neurons, layers) are
written with the PDP++ API.
As far as the scripting language goes, my initial impression is that the
through-and-through use of object orientation means that everything
can be hierarchically organized, which means that things can be loaded
and unloaded in bits and pieces.
@Verbatim {
{ float_RArray* ths = .projects[0].environments[0].events[0].patterns[0]->value;
  ths = "{1,0,0,0,}"; }
{ float_RArray* ths = .projects[0].environments[0].events[0].patterns[1]->value;
  ths = "{1,0,0,0,}"; }
{ float_RArray* ths = .projects[0].environments[0].events[1].patterns[0]->value;
  ths = "{0,1,0,0,}"; }
{ float_RArray* ths = .projects[0].environments[0].events[3].patterns[0]->value;
  ths = "{0,0,0,1,}"; }
{ float_RArray* ths = .projects[0].environments[0].events[3].patterns[1]->value;
  ths = "{0,0,0,1,}"; }
.projects[0].processes->New(1, TrainProcess);
{ taNBase* ths = .projects[0].processes[0];
  ths->name = "Train"; }
}


Now I would feel very comfortable writing script code like that above
--- no new syntax to learn. Plus, I am using the same language that the
objects were created in. However, the stub information generated by
parsing C++ header files is cryptic. Once something fails in the
script language, you have this one component of your simulation that
is difficult to fathom.

Furthermore, CSS has a serious weakness: one consequence of the lack of
name resolution is that only default (no-argument) constructors can be
called.

@End @Section

@Section @Title { Tangents }
@List
@ListItem { The notebook widget (as shown in Tix) would be a great thing
to add to NSL. }
@ListItem { I don't see the need for vectors --- they are nothing but
matrices with one row or one column. }
@ListItem { I assume everything that is in NSL now will stay there. That
is why I am not mentioning things that PDP++ has that NSL also has. }
@ListItem { I don't think we should change the design of NSL very much.
Just hollow out the inflexibility in the representation of the MATRIX
class so that it could @I process (use, print, dump to file) a matrix
from any other software --- Matlab, Rlab, Khoros, etc. }
@EndList

@End @Section
