Takane, Y., Oshima-Takane, Y., & Shultz, T. R. (1999). Analysis of knowledge representations in cascade correlation networks. Behaviormetrika, 26, 5-28.

 

Abstract

Feed-forward neural network models approximate nonlinear functions connecting inputs to outputs. The cascade correlation (CC) learning algorithm allows networks to grow dynamically, starting from the simplest network topology, to solve increasingly difficult problems. CC networks have been shown to solve a wide range of problems, including some on which other kinds of networks (e.g., back-propagation networks) fail. In this paper we examine the mechanism and characteristics of nonlinear function learning and representation in CC networks, their generalization capability, and the effects of environmental bias, using a variety of knowledge-representation analysis tools.
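For readers unfamiliar with how a CC network "grows dynamically," the following is a minimal, highly simplified sketch of the growth loop mentioned in the abstract. It assumes a single candidate unit, a linear output, and plain gradient steps (rather than a candidate pool and quickprop as in the full Fahlman and Lebiere algorithm); all function and variable names are illustrative, not from the paper. The loop trains the output weights, then repeatedly recruits a hidden unit trained to correlate with the residual error, freezes its incoming weights, and retrains the outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_outputs(H, y, lr=0.1, epochs=500):
    # Fit the linear output unit on the current feature matrix H (inputs,
    # bias, and any recruited hidden units) by gradient descent on MSE.
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        err = H @ w - y
        w -= lr * H.T @ err / len(y)
    return w

def train_candidate(H, residual, lr=0.1, epochs=500):
    # Train one candidate unit to maximize the covariance between its
    # activation and the current residual error (the CC recruitment criterion).
    v = rng.normal(scale=0.1, size=H.shape[1])
    for _ in range(epochs):
        a = np.tanh(H @ v)
        ac, rc = a - a.mean(), residual - residual.mean()
        cov = ac @ rc
        grad = ((1 - a ** 2) * rc) @ H      # d(cov)/dv
        v += lr * np.sign(cov) * grad / len(residual)
    return v

def cascade_correlation(X, y, max_hidden=5, tol=1e-3):
    # Start from the simplest topology (inputs + bias, no hidden units) and
    # grow: recruit a hidden unit, freeze its incoming weights, retrain outputs.
    H = np.hstack([X, np.ones((len(X), 1))])
    hidden = []
    w = train_outputs(H, y)
    for _ in range(max_hidden):
        residual = H @ w - y
        if np.mean(residual ** 2) < tol:
            break
        v = train_candidate(H, residual)
        hidden.append(v)
        # The new unit receives all inputs and all earlier hidden units (the "cascade").
        H = np.hstack([H, np.tanh(H @ v)[:, None]])
        w = train_outputs(H, y)
    return hidden, w

# Example: XOR, which no network without hidden units can represent.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
hidden, w = cascade_correlation(X, y)
print("hidden units recruited:", len(hidden))
```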

 

Copyright notice

Abstracts, papers, chapters, and other documents are posted on this site as an efficient way to distribute reprints. The respective authors and publishers of these works retain all of the copyrights to this material. Anyone copying, downloading, bookmarking, or printing any of these materials agrees to comply with all of the copyright terms. Other than having an electronic or printed copy for fair personal use, none of these works may be reposted, reprinted, or redistributed without the explicit permission of the relevant copyright holders.

 

A PDF reprint of this article (2,169 KB) is available to readers who agree to these copyright terms.