Takane, Y., Oshima-Takane, Y., & Shultz, T. R. (1994, December). Approximations of nonlinear functions by feed-forward networks. Proceedings of the 11th Annual Meeting of the Japan Classification Society (pp. 26-33). Tokyo: Japan Classification Society.

Abstract

Neural network (NN) models are widely used in artificial intelligence, pattern recognition, and cognitive psychology. Feed-forward networks may be viewed as approximating nonlinear functions that connect inputs to outputs (e.g., Ripley, 1993), and they are known to be robust and efficient approximators of such functions (e.g., Hornik, Stinchcombe, & White, 1989). We analyze how these approximations are carried out, using a variety of multivariate and graphical techniques (Takane, Oshima-Takane, & Shultz, 1994; Oshima-Takane, Shultz, & Takane, forthcoming). The particular architecture we are interested in is the cascade correlation (CC) learning network (Fahlman & Lebiere, 1990), which dynamically grows networks to adapt to more complicated problems. We look at how learning and the representation of knowledge occur in CC networks as they perform a variety of tasks. We also examine generalization capability and the effect of environmental bias in training.
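To make the cascade-correlation idea concrete, the following is a minimal numerical sketch, not the authors' code and not Fahlman and Lebiere's full algorithm. Genuine CC trains a pool of candidate hidden units by gradient ascent on the correlation between each candidate's output and the residual network error before installing the best one; this simplified version instead draws a random candidate pool and keeps the most correlated candidate. The toy task (fitting sin x), the pool size, and all variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: recover y = sin(x) from scattered samples.
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X).ravel()

def features(X, hidden_ws):
    # Bias and input columns, extended by each frozen hidden unit in turn;
    # the "cascade" means every new unit also sees all earlier units.
    Z = np.column_stack([np.ones(len(X)), X])
    for w in hidden_ws:
        Z = np.column_stack([Z, np.tanh(Z @ w)])
    return Z

hidden_ws = []
for step in range(8):
    Z = features(X, hidden_ws)
    # Output phase: retrain only the output weights (here, by least squares).
    out_w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ out_w
    print(f"{len(hidden_ws)} hidden units, RMSE = {np.sqrt(np.mean(resid ** 2)):.4f}")
    # Candidate phase (simplified): real CC trains candidates by gradient
    # ascent on their correlation with the residual error; this sketch
    # draws a random pool and keeps the best-correlated candidate.
    cands = rng.normal(scale=2.0, size=(Z.shape[1], 50))
    acts = np.tanh(Z @ cands)                        # (n_samples, n_candidates)
    corrs = np.abs(np.corrcoef(acts.T, resid)[-1, :-1])
    hidden_ws.append(cands[:, np.argmax(np.nan_to_num(corrs))])

Because each installed unit adds a column to the output regression, the residual error cannot increase and typically drops as units accumulate, which illustrates the dynamic-growth behavior the abstract refers to.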

