Montrey, M., & Shultz, T. R. (2010). Evolution of social learning strategies. Proceedings of the Ninth IEEE International Conference on Development and Learning (pp. 95-100). Ann Arbor, MI: IEEE.  

 

Abstract

We study three types of learning with Bayesian agent-based modeling. First, we show that previous results obtained from learning chains generalize to a more realistic lattice world involving multiple social interactions. Learning based on the passing of posterior probabilities converges to the truth more quickly and reliably than learning based on imitation and sampling from the environment, and the latter gets closer to the truth than pure imitation does. Passing posterior probability distributions can be viewed as teaching by explanation and as an implementation of the cultural ratchet, which allows rapid progress without backsliding. We also show that evolution selects these learning strategies in proportion to their success. However, if the environment changes very rapidly, evolution favors the imitation-plus-reinforcement strategy over the more sophisticated posterior passing. Implications for developmental robotics, human uniqueness, and interactions between learning and evolution are discussed.
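
Illustrative sketch (not the authors' model): the Python fragment below mimics the three strategies contrasted in the abstract under assumed simplifications, namely a ring of Bayesian agents holding Beta posteriors over a single Bernoulli environmental parameter, one social-learning event per agent per generation, and made-up values for P_TRUE, N_SAMPLES, and the other constants. It is meant only to make the qualitative contrast concrete, not to reproduce the paper's model or results.

import random

# Minimal sketch of the three strategies named in the abstract, NOT the
# authors' actual model. Assumptions: a ring lattice (each agent learns from
# one neighbor per generation), a Bernoulli environment with unknown bias
# P_TRUE, and Beta(alpha, beta) posteriors. All names and values are illustrative.

P_TRUE = 0.7           # hypothetical environmental parameter to be learned
N_AGENTS = 20
N_GENERATIONS = 30
N_SAMPLES = 5          # environmental samples per agent per generation (assumed)

def sample_env(n):
    """Draw n Bernoulli(P_TRUE) observations from the environment."""
    return [1 if random.random() < P_TRUE else 0 for _ in range(n)]

def estimate(a, b):
    """Posterior mean of a Beta(a, b) belief."""
    return a / (a + b)

def generation(agents, strategy):
    """Each agent learns socially from its left neighbor, then (depending on
    the strategy) samples the environment and updates its Beta posterior."""
    new = []
    for i in range(len(agents)):
        na, nb = agents[i - 1]                      # neighbor's current belief
        if strategy == "posterior_passing":
            a, b = na, nb                           # inherit the full posterior
            data = sample_env(N_SAMPLES)
        elif strategy == "imitation_plus_sampling":
            est = estimate(na, nb)                  # observe only a point estimate
            a, b = 1 + est, 2 - est                 # treat it as a weak prior (illustrative choice)
            data = sample_env(N_SAMPLES)
        else:                                       # "pure_imitation"
            est = estimate(na, nb)
            a, b = 1 + est, 2 - est
            data = []                               # no environmental evidence
        for x in data:                              # standard Beta-Bernoulli update
            a, b = a + x, b + (1 - x)
        new.append((a, b))
    return new

for strategy in ("posterior_passing", "imitation_plus_sampling", "pure_imitation"):
    random.seed(0)                                  # same random stream for each strategy
    agents = [(1.0, 1.0)] * N_AGENTS                # uniform Beta(1, 1) priors
    for _ in range(N_GENERATIONS):
        agents = generation(agents, strategy)
    err = sum(abs(estimate(a, b) - P_TRUE) for a, b in agents) / N_AGENTS
    print(f"{strategy:25s} mean |posterior mean - P_TRUE| = {err:.3f}")

In this toy setting, posterior passing typically ends with the smallest error because accumulated evidence is never discarded (a crude analogue of the cultural ratchet), imitation plus sampling hovers near the truth with more variance, and pure imitation never moves beyond the initial prior.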

 

