Class Details

Parallel Distributed Processing

Spring 2017


The goal of the course is to introduce neural network models as tools for understanding how cognitive functioning might arise from neural processes, and to provide hands-on knowledge for developing, running, and analyzing such models. The course involves a mix of lectures, computer-based labs, readings, assignments, and class presentations. Assignments will generally require you to report the results of simulations you have carried out, to analyze these results, and to think critically about some issues raised in the readings. There will also be a final project in which you develop a model relevant to your research interests, conduct simulations with the model, and report on the results.

The course is divided into three sections: Basics (in which we will learn the basics of processing and learning in neural networks), Flavors (in which we will learn about the various kinds of network models), and Applications (in which we will study some influential applications of modeling to cognitive phenomena). Classes will alternate between lectures (Tuesdays) and lab time (Thursdays). Lab time for the first 7 weeks will be spent on specific assignments that I will hand out in class. For labs you will need to bring your laptop to class with the requisite software installed (see below). I will provide some general hands-on instruction relevant to the course assignments each week. You will complete the assignments on your own time, submitting them to me via email by the due dates noted on the syllabus.

At the end of week 6 you will submit a one-page proposal outlining the final project you intend to carry out. This will be returned with feedback, at which point you should get started on your project. You should be on the lookout throughout the earlier sections of the course for topics or issues that you find particularly interesting and would like to pursue in more detail in a project. Feel free to discuss your ideas with me ahead of time. In the last 5 lab sessions, students will present a 20-minute progress report describing the motivation for their project, the approach they are taking, and their progress thus far. A 10-15 page paper based on the final project is due at the end of the semester; there is no final exam for the course.

In general, there are assigned readings for each lecture that are intended to prepare you to participate in the class discussion for that day. I will also post optional background readings on the course webpage; these may serve as the basis for a lecture, present an alternative point of view, or simply make available relevant material that we won't have time to cover in class. The optional readings are also a good source of ideas for projects.


Homework assignments: 15% each for a total of 45%
Project proposal: 5%
Project presentation: 10%
Class participation: 10%
Final project: 30%
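As a quick sanity check, the components above add up to a full grade: the 45% homework figure corresponds to three assignments at 15% each, and the whole breakdown sums to 100%. A one-line arithmetic check (the count of three assignments is inferred from the 45% total):

```shell
# Three homework assignments at 15% each, plus the remaining components.
total=$((3 * 15 + 5 + 10 + 10 + 30))
echo "$total"   # prints 100
```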


There is no required text for the course. All assigned and optional readings are available as downloadable PDF files from links on the course web page, as are other course materials (e.g., software downloads, handouts, assignments, and additional readings).

The following texts contain some of the course readings and may be useful as general references:

  • PDP1: Rumelhart, D. E., McClelland, J. L., & the PDP research group (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Volume 1: Foundations. Cambridge, MA: MIT Press.
  • PDP2: McClelland, J. L., Rumelhart, D. E., & the PDP research group (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Volume 2: Psychological and biological models. Cambridge, MA: MIT Press.
  • PDP Handbook: McClelland, J. L., & Rumelhart, D. E. (1988). Explorations in parallel distributed processing: A handbook of models, programs, and exercises. Cambridge, MA: MIT Press.
  • MPR: McLeod, P., Plunkett, K., & Rolls, E. T. (1998). Introduction to Connectionist Modelling of Cognitive Processes. Oxford: Oxford University Press.


We will be using a software package called "Lens" (for Light Efficient Network Simulator), developed by Doug Rohde, formerly a graduate student at Carnegie Mellon University. Lens runs under both Windows and Unix-based systems (including Linux and Mac OS X). A link to the main Lens website is available on the course web page.

If you’re running one of the following operating systems, you can download a file containing a precompiled version of Lens.

  • Windows
    Download the file and use WinZip or a similar program to unzip the file. This will create a directory called “Lens”. Read the file “README.rtf” in this directory for further instructions.
  • Linux
    Download the file Lens-linux.tar.gz. Open a terminal and untar it with the command “tar xzf Lens-linux.tar.gz”. This will create a directory called “Lens”. Read the text file README in this directory for further instructions.
  • Mac OS X
    Download the file Lens-OSX.tar.gz. Open a terminal and untar it with the command “tar xzf Lens-OSX.tar.gz”. This will create a directory called “Lens”. Read the text file README in this directory for further instructions.
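On Linux or Mac OS X, the steps above amount to a short terminal session. The following sketch uses the Linux filename and assumes the archive has already been downloaded into the current directory:

```shell
# Unpack the downloaded archive; on Mac OS X substitute Lens-OSX.tar.gz.
# Extraction creates a directory called "Lens".
if [ -f Lens-linux.tar.gz ]; then
    tar xzf Lens-linux.tar.gz
    ls Lens            # the README in this directory has further instructions
fi
```

The `[ -f ... ]` guard simply makes the snippet safe to paste before the download has finished; once the archive is present, the `tar xzf` line does all the work.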

If you’re not using one of these platforms, or if for some reason the precompiled version for your platform doesn’t work (let me know if this happens), you’ll need to download the Lens source and compile it yourself. Instructions for how to do this are linked from the course web page.

After installing Lens, you should look through the online manual, in particular the instructions under “Running Lens” and the Tutorial Network under “Example Networks”. The precompiled versions of Lens come with an offline (local) copy of the manual, which can be accessed by pointing your web browser at Manual/index.html in the Lens directory. Those installing Lens from scratch can also download an offline copy of the manual (lens-manual.tar.gz); just move it into your Lens directory and run “tar xzf lens-manual.tar.gz”.