With a
relatively small number of hidden units, an encoder network is forced to
form a compact, and thus abstract, representation of the
inputs. Inputs are encoded onto this abstract hidden-unit representation
using the input-side weights, and the hidden-unit representation is then decoded onto
the output units using the output-side weights. Because the discrepancy between
input and output activations constitutes the network's error, there is a sense in
which encoder networks require no external feedback beyond the
training inputs themselves.
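This self-supervised arrangement can be sketched in a few lines of numpy. The following is a minimal illustration, not the text's own implementation; it assumes the classic 4-2-4 encoder task (four one-hot patterns squeezed through a two-unit bottleneck), sigmoid units, and plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed task: 4-2-4 encoder. One-hot inputs must pass through a
# 2-unit hidden bottleneck and be reconstructed on the output units.
X = np.eye(4)  # inputs double as targets

n_in, n_hid = 4, 2
W_enc = rng.normal(scale=0.5, size=(n_in, n_hid))   # input-side weights
W_dec = rng.normal(scale=0.5, size=(n_hid, n_in))   # output-side weights
b_enc = np.zeros(n_hid)
b_dec = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
losses = []
for epoch in range(3000):
    # Forward pass: encode onto the hidden units, decode onto the outputs.
    h = sigmoid(X @ W_enc + b_enc)
    y = sigmoid(h @ W_dec + b_dec)

    # The error is simply the input/output discrepancy: no teaching
    # signal is needed beyond the training inputs themselves.
    err = y - X
    losses.append(0.5 * np.sum(err ** 2))

    # Backpropagate through the sigmoids and update both weight layers.
    d_y = err * y * (1 - y)
    d_h = (d_y @ W_dec.T) * h * (1 - h)
    W_dec -= lr * h.T @ d_y
    b_dec -= lr * d_y.sum(axis=0)
    W_enc -= lr * X.T @ d_h
    b_enc -= lr * d_h.sum(axis=0)

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Note that the reconstruction loss falls as training proceeds even though the network never sees a target other than its own input, which is the sense in which such networks are self-supervised.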