Encoder networks
can implement a kind of recognition memory, as would be appropriate for
simulation of habituation and familiarization studies. If an encoder network
can learn to encode a stimulus onto a small number of hidden units and then
decode this hidden unit representation onto its output units with very little
error, then it has recognized the stimulus as familiar.

The essential
change from ordinary cascade-correlation (CC) is the elimination of any
direct input-to-output unit connection weights. If such direct connections
are retained, then learning an encoder problem is trivial, requiring only a
weight of 1.0 between each input unit and its corresponding output unit. With
such a trivial solution, a network learns nothing useful that could enable
such phenomena as pattern completion or prototype abstraction.
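The triviality of the direct-connection solution can be seen in a few lines. This is a hedged illustration with linear output units (ordinary CC outputs need not be linear): an identity weight matrix, 1.0 from each input unit to its corresponding output unit and 0.0 elsewhere, reproduces any stimulus exactly while bypassing the hidden bottleneck entirely.

```python
import numpy as np

n = 8
direct = np.eye(n)  # trivial solution: weight 1.0 on each input->output pair

stimulus = np.array([1., 0., 1., 1., 0., 0., 1., 0.])
output = stimulus @ direct  # perfect copy, no hidden units involved

assert np.allclose(output, stimulus)  # zero error for ANY input vector
```

Because this solution works for every possible input, a network with direct connections has no pressure to build a compressed internal representation, which is exactly what pattern completion and prototype abstraction depend on.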