Using the cascade-correlation algorithm, we wrote a program that produces an output-activation diagram for a network with two continuous input values, a single binary output unit, and random connection weights. Here are several examples of art created by networks with various numbers of hidden units and different color schemes.

Cascade-correlation is an algorithm for learning in neural networks invented by Scott Fahlman of Carnegie Mellon University. In the Laboratory for Natural and Simulated Cognition at McGill University, we have used cascade-correlation to simulate a wide range of problems in cognitive development. Cascade-correlation is a constructive algorithm: it creates its own topology of hidden units as it learns. For comparison purposes, our simulator also allows use of the popular back-propagation algorithm, which trains static networks whose topology does not change.
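The constructive cycle described above can be sketched as follows. This is an illustrative Python outline, not the simulator's actual code; the helpers `train_outputs`, `recruit_unit`, and `network_error` are hypothetical stand-ins for cascade-correlation's output-training and candidate-recruitment phases:

```python
# Illustrative sketch of cascade-correlation's constructive cycle.
# In the real algorithm, a pool of candidate units is trained to maximize
# the correlation between each candidate's activation and the network's
# residual error; the best candidate is then installed and frozen.
def cascade_correlation(train_outputs, recruit_unit, network_error,
                        max_units, error_threshold):
    hidden_units = []
    train_outputs(hidden_units)                # output phase: train output weights
    while (network_error(hidden_units) > error_threshold
           and len(hidden_units) < max_units):
        # input phase: recruit and install a new hidden unit
        hidden_units.append(recruit_unit(hidden_units))
        train_outputs(hidden_units)            # retrain output weights
    return hidden_units
```

The key design point is that the network starts with no hidden units at all and only grows when the current topology cannot reduce error further, which is what lets the algorithm build its own topology.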

To use either algorithm, you only need to paste in your training patterns (and, optionally, test patterns) and set a few parameters. Some pre-defined sample problems are also available within the simulator.

You need the Java™ Plug-in 1.3 to view the applet below. This plug-in is compatible with Microsoft Windows XP, 2000, Me, NT, 98, and 95, using the Internet Explorer or Netscape browser.

If the Java Plug-in is not already installed on your system, your browser should be able to install it automatically. This may take a few minutes. If you have problems with the automatic detection and download of the Java Plug-in 1.3, please go directly to http://java.sun.com/products/plugin/ to download and install this required plug-in.


## Training Completion Criterion

This implementation of the cascade-correlation and back-propagation algorithms uses the Minkowski infinity distance instead of the usual sum of squared errors (SSE) as the criterion for deciding when to stop training. For a system with n outputs, the Minkowski infinity distance (MINF) between the target values (x1, x2, ..., xn) and the computed values (y1, y2, ..., yn) is defined as MINF = max{|x1 - y1|, |x2 - y2|, ..., |xn - yn|}. When this distance is smaller than a pre-determined value (the score-threshold parameter) for every training pattern, we consider training to be successfully completed.
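As a concrete sketch (in Python, with made-up target and output values rather than anything from the simulator), the stopping check reduces to taking the maximum absolute difference per pattern and comparing it against the score threshold:

```python
def minkowski_inf(target, output):
    """Minkowski infinity (Chebyshev) distance between target and output vectors."""
    return max(abs(x - y) for x, y in zip(target, output))

def training_complete(patterns, score_threshold):
    """Training succeeds only when every pattern's MINF is below the threshold."""
    return all(minkowski_inf(target, output) < score_threshold
               for target, output in patterns)

# Hypothetical (target, output) pairs for a 2-output network
patterns = [([1.0, 0.0], [0.9, 0.1]),    # MINF = 0.1
            ([0.0, 1.0], [0.2, 0.85])]   # MINF = 0.2
print(training_complete(patterns, 0.4))  # every MINF below 0.4 -> True
```

Note that, unlike SSE, this criterion is sensitive only to the single worst output on the single worst pattern, so no individual error can hide inside an otherwise small sum.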