In equations 2.1-2.4, y refers to unit output regardless of whether the unit is sending or receiving, and x refers to net input to a receiving unit regardless of where that unit resides. This consistent labeling makes it easier to follow what happens in transmission between any two layers. When considering transmission across several network layers, however, it can be clearer to reserve different labels for different layers of units.
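For concreteness, here is the conventional form such equations take (a sketch only; the weight symbol w_{ji} and the activation function f are my notation, not necessarily that of equations 2.1-2.4):

\[
x_j = \sum_i w_{ji}\, y_i, \qquad y_j = f(x_j),
\]

where y_i is the output of sending unit i, w_{ji} is the weight on the connection from unit i to receiving unit j, x_j is the net input to unit j, and y_j is the output that unit j in turn sends on.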
In a number of places in the book, I use the term generative to describe cascade-correlation networks that build their own topology as they learn. The term was apt because the learning algorithm generates its own topological structure by recruiting new hidden units as needed. However, generative is now also used to describe neural networks that generate their own input patterns, usually via top-down weights running from outputs to hiddens to inputs. To avoid this dual use of the term, in more recent publications I refer to cascade-correlation and related networks as constructive learners.
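As a rough illustration of what constructive learning means, the following is a minimal Python sketch, not the full cascade-correlation algorithm: it selects the best of a pool of random candidate units using cascade-correlation's covariance score rather than training the candidates, and it fits output weights by least squares rather than by quickprop. All function and variable names here are mine, for illustration only.

import numpy as np

def train_readout(features, targets):
    # Least-squares fit of the output weights (a simplification; real
    # cascade-correlation trains output weights with quickprop).
    return np.linalg.lstsq(features, targets, rcond=None)[0]

def recruit_candidate(features, residual, rng, n_candidates=50):
    # Score a pool of random candidate units by the absolute covariance
    # between their activation and the residual error, and keep the best.
    # Full cascade-correlation also trains candidates to maximize this score.
    centered = residual - residual.mean(axis=0)
    best_w, best_score = None, -np.inf
    for _ in range(n_candidates):
        w = rng.normal(size=features.shape[1])
        act = np.tanh(features @ w)
        score = np.abs((act - act.mean()) @ centered).sum()
        if score > best_score:
            best_w, best_score = w, score
    return best_w

def constructive_fit(X, Y, max_hidden=8, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    features = np.hstack([X, np.ones((len(X), 1))])  # inputs plus bias
    hidden_ws = []                                   # frozen recruited units
    while True:
        W = train_readout(features, Y)
        residual = Y - features @ W
        mse = np.mean(residual ** 2)
        if mse < tol or len(hidden_ws) >= max_hidden:
            return hidden_ws, W, mse
        w = recruit_candidate(features, residual, rng)
        hidden_ws.append(w)
        # Cascade: each new unit sees the inputs and all earlier recruits.
        features = np.hstack([features, np.tanh(features @ w)[:, None]])

# Demo on XOR, which a linear readout of the raw inputs cannot solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])
hidden_ws, W, mse = constructive_fit(X, Y)
print(f"recruited {len(hidden_ws)} hidden unit(s), final MSE {mse:.2e}")

The point of the sketch is the outer loop: the network begins with no hidden units at all and generates its own topology, recruiting and then freezing one hidden unit at a time only when the current structure cannot reduce the error further.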