Apart from the absence of direct input-output connections, training proceeds as in standard cascade-correlation. When error reduction stagnates, a hidden unit is recruited. Once the first hidden unit is added, its input weights are frozen (shown as solid arrows) and training of the output weights resumes. The network grows as it learns.
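The recruit-and-freeze loop described above can be sketched as follows. This is a simplified stand-in, not the original algorithm: candidates are drawn from a random pool and scored by correlation with the residual error instead of being trained by gradient ascent, recruitment happens on a fixed schedule rather than on detected stagnation, and output weights are refit by least squares. As in the figure, there are no direct input-output connections, and each recruited unit's input weights are frozen permanently.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cascade_inputs(X, frozen_units):
    """Each recruited unit sees the raw inputs plus the outputs of all
    earlier units (the cascade); returns the matrix [X, h1, ..., hk]."""
    inp = X
    for w in frozen_units:
        inp = np.column_stack([inp, sigmoid(inp @ w)])
    return inp

def recruit_unit(X, residual, frozen_units, n_candidates=50):
    """Score a random candidate pool by correlation with the residual
    error and return the winner's input weights, which are then frozen
    (a simplification of gradient-trained candidate units)."""
    inp = cascade_inputs(X, frozen_units)
    C = rng.normal(size=(inp.shape[1], n_candidates))
    acts = sigmoid(inp @ C)
    r = residual - residual.mean()
    scores = np.abs((acts - acts.mean(axis=0)).T @ r)
    return C[:, scores.argmax()]

def train(X, y, max_units=5):
    frozen, out_w = [], np.zeros(0)
    d = X.shape[1]
    errors = [float(np.mean(y ** 2))]   # error with no hidden units yet
    for _ in range(max_units):
        H = cascade_inputs(X, frozen)[:, d:]
        residual = y - H @ out_w
        # recruit a unit and freeze its input weights
        frozen.append(recruit_unit(X, residual, frozen))
        # retrain the output weights over all hidden activations only:
        # no direct input-to-output connections, as in the figure
        H = cascade_inputs(X, frozen)[:, d:]
        out_w, *_ = np.linalg.lstsq(H, y, rcond=None)
        errors.append(float(np.mean((y - H @ out_w) ** 2)))
    return frozen, out_w, errors

X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] * X[:, 1])          # XOR-like target
frozen, out_w, errors = train(X, y, max_units=5)
```

Because the output weights are refit over a strictly growing set of hidden activations, the training error is non-increasing as units are recruited, which mirrors the growth-as-learning behavior described in the caption.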