My students, colleagues, and I study human cognition and behavior through a combination of psychological and computational approaches. Basic psychological phenomena are simulated, leading to predictions that can be empirically tested. Current lines of research concern evolution, cognitive development, interactions between knowledge and learning, techniques for analyzing knowledge representations in neural networks, and cognitive consistency phenomena in social psychology.
Studying evolution is difficult because it consists of a possibly unique and unrepeatable sequence of events, leaving records that are often sparse and open to multiple interpretations. However, computer simulations enable experiments on evolution, with access to complete records. We have begun to explore the evolution of ethnocentrism with agent-based models that allow for reproduction and a variety of learning techniques. Simple computer agents interact with each other in non-zero-sum games (e.g., Prisoner's Dilemma), and these interactions modulate their reproductive fitness. Under certain assumptions and from a neutral start, simulations show that an ethnocentric strategy, involving cooperation within one's own group and defection against agents from other groups, comes to comprise about 75% of a population. Our initial experiments investigated the dynamics underlying these results, explaining the occurrence of early humanitarian stages (universal cooperation), eventual ethnocentric dominance, and the generally poor evolutionary performance of selfish (cooperate with no one) and traitorous (cooperate only with other groups) strategies. Current interests focus on interactions between learning and evolution, and on attempts to increase cooperation across groups of innately ethnocentric agents.
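The basic logic of such a simulation can be sketched as follows. This is a minimal, hypothetical sketch in the spirit of the tag-based models described above, not our actual model: all parameter values (payoffs, mutation and death rates, carrying capacity) are illustrative assumptions, and agents are paired at random rather than placed on a lattice.

```python
import random

# Assumed, illustrative parameters (not taken from the published model).
COOP_COST, COOP_BENEFIT = 0.01, 0.03
BASE_FITNESS, MUT_RATE, DEATH_RATE = 0.12, 0.005, 0.10

def make_agent():
    return {"tag": random.randrange(4),       # group marker
            "in": random.random() < 0.5,      # cooperate within group?
            "out": random.random() < 0.5,     # cooperate across groups?
            "fit": BASE_FITNESS}

def interact(a, b):
    # One-shot PD-style exchange: cooperating costs the donor a little
    # and benefits the recipient more.
    same = a["tag"] == b["tag"]
    if a["in" if same else "out"]:
        a["fit"] -= COOP_COST
        b["fit"] += COOP_BENEFIT
    if b["in" if same else "out"]:
        b["fit"] -= COOP_COST
        a["fit"] += COOP_BENEFIT

def mutate(a):
    # Offspring inherit the parent's genes with rare mutations.
    child = dict(a)
    for gene in ("in", "out"):
        if random.random() < MUT_RATE:
            child[gene] = not child[gene]
    if random.random() < MUT_RATE:
        child["tag"] = random.randrange(4)
    return child

def step(pop, capacity=500):
    for a in pop:
        a["fit"] = BASE_FITNESS
    random.shuffle(pop)
    for a, b in zip(pop[::2], pop[1::2]):     # random pairwise encounters
        interact(a, b)
    offspring = [mutate(a) for a in pop if random.random() < a["fit"]]
    survivors = [a for a in pop if random.random() > DEATH_RATE]
    pop = survivors + offspring
    random.shuffle(pop)
    return pop[:capacity]                     # cull to carrying capacity

def ethnocentric_share(pop):
    # Fraction of agents that cooperate within but not across groups.
    return sum(a["in"] and not a["out"] for a in pop) / len(pop)
```

Running `step` repeatedly and tracking `ethnocentric_share` lets one watch the strategy composition of the population drift over generations; whether ethnocentrism dominates under this toy parameterization is an empirical question for the simulation itself.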
One of the major unsolved problems in cognitive development concerns the nature of transition mechanisms: how does the child progress from one stage of reasoning to the next? We have shown that cognitive transitions can be modeled with neural networks that grow as well as learn. We used Fahlman's cascade-correlation algorithm, which recruits new hidden units when network error cannot be further reduced by quantitative adjustment of connection weights. We applied such networks to modeling phenomena such as reasoning on balance-scale, seriation, and conservation tasks, shift learning, infant habituation, acquisition of personal pronouns, and the integration of velocity, time, and distance information. This modeling work has shed light on a variety of other issues in cognitive development. How is knowledge represented at various stages? What accounts for the particular order of stages? Why does development take a particular shape? Why do children develop non-normative rules?
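The recruitment idea can be illustrated with a toy version of the algorithm on the XOR problem. This sketch simplifies Fahlman's method in one important way: rather than training a pool of candidate units to maximize their correlation with residual error, it simply draws random candidates and recruits the one whose output happens to correlate best. Everything else (train the output weights, check the error, recruit, repeat) follows the loop described above.

```python
import numpy as np

# Toy cascade-correlation loop on XOR. Simplification: candidates are
# drawn at random instead of being trained, then the best-correlated
# one is recruited, as a stand-in for Fahlman's candidate training.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_output(H, y, epochs=2000, lr=0.5):
    # Quantitative phase: logistic-regression output weights on the
    # current feature set H, trained by gradient descent.
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        p = sigmoid(H @ w)
        w -= lr * H.T @ (p - y) / len(y)
    return w

H = np.hstack([X, np.ones((4, 1))])        # raw inputs plus a bias column
for _ in range(3):                         # up to three recruitment rounds
    w = train_output(H, y)
    err = y - sigmoid(H @ w)               # residual error at the output
    if np.max(np.abs(err)) < 0.1:
        break                              # weight adjustment sufficed
    # Recruitment phase: best of a pool of random candidate hidden units.
    best_h, best_corr = None, -1.0
    for _ in range(200):
        v = rng.normal(size=H.shape[1])
        h = sigmoid(H @ v)
        corr = abs(np.corrcoef(h, err)[0, 1])
        if corr > best_corr:
            best_h, best_corr = h, corr
    H = np.hstack([H, best_h[:, None]])    # cascade the new unit in
```

Because XOR is not linearly separable, the first quantitative phase stalls at chance-level predictions, forcing recruitment; each recruited unit then becomes an input to later candidates as well as to the output, which is the cascading part of the architecture.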
Unlike most neural networks, people rarely learn from scratch. Instead, they are likely to retrieve and modify their existing knowledge to deal with new problems. This reliance on existing knowledge partly explains why people sometimes learn complex tasks so rapidly and why their learning is often biased in particular ways. Neural networks are ideal devices for exploring the complex relations between knowledge and learning. To study such issues, we developed a new algorithm, called knowledge-based cascade-correlation (KBCC), that is able to recruit previously learned sub-networks in the service of new learning.
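The key difference from ordinary recruitment can be shown in a small, hypothetical sketch: the candidate pool contains whole previously learned networks alongside fresh single units, and the recruiter keeps whichever candidate best tracks the residual error. The "prior network" here is just a function handle standing in for a trained sub-network, and all names and values are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pick_candidate(H, err, prior_nets, rng, pool=100):
    # Simplified KBCC-style recruitment: candidates are previously
    # learned networks (applied whole to the current inputs) plus a
    # pool of random single sigmoid units; return the candidate whose
    # output correlates best with the residual error.
    candidates = [net(H) for net in prior_nets]            # sub-networks
    candidates += [sigmoid(H @ rng.normal(size=H.shape[1]))
                   for _ in range(pool)]                   # fresh units
    return max(candidates, key=lambda h: abs(np.corrcoef(h, err)[0, 1]))

# Example: the residual error is XOR-shaped, and a previously learned
# XOR network is in the pool, so it should beat any single unit.
rng = np.random.default_rng(1)
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
xor_net = lambda H: sigmoid(8 * (H[:, 0] + H[:, 1]
                                 - 2 * H[:, 0] * H[:, 1]) - 4)
err = np.array([-0.5, 0.5, 0.5, -0.5])                    # XOR pattern
best = pick_candidate(X, err, [xor_net], rng)
```

Because no single sigmoid unit can match an XOR-shaped residual exactly, the prior XOR network wins the competition, which is the sense in which old knowledge can shortcut new learning.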
Some of the initial enthusiasm for connectionist modeling waned as researchers discovered that neural network solutions to learning problems were difficult to understand and thus difficult to relate to human solutions. These difficulties resulted, in large part, from a lack of techniques for analyzing the knowledge representations learned by neural networks. We explored a number of techniques for identifying knowledge representations, including graphing approximated functions and various techniques for reducing the dimensionality of network contributions (the products of sending-unit activations and output-side connection weights), including principal components analysis, reduced-rank approximation, and Parafac analysis. Such analyses provide many useful insights into how neural networks learn to solve problems, and the resulting network representations can then be compared to human knowledge representations.
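The contribution analysis itself is straightforward to sketch. In this illustration the tiny "network" (its sending-unit activations and output-side weights) is invented for demonstration; the point is the pipeline: form contributions as activation-weight products, then reduce their dimensionality, here with principal components analysis via the singular value decomposition.

```python
import numpy as np

# Illustrative network data: 20 input patterns drive 3 sending units,
# which connect to one output unit. Values are made up for the demo.
rng = np.random.default_rng(0)
activations = rng.random((20, 3))          # sending-unit activations
w_out = np.array([0.8, -1.2, 0.3])         # output-side weights (assumed)

# A contribution is the product of a sending-unit activation and its
# output-side weight: one row per pattern, one column per sending unit.
contributions = activations * w_out

# PCA via SVD on the centered contribution matrix.
centered = contributions - contributions.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)            # variance explained per component
scores = centered @ Vt.T                   # patterns in component space
```

Plotting the rows of `scores` on the first two components groups input patterns by how the network treats them, and those groupings are what get compared to human knowledge representations.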
Several theories in cognitive social psychology emphasize the tendency of people to strive for consistency among their various beliefs and attitudes. We proposed that such strivings for consistency can be understood in terms of constraint satisfaction. Constraint-satisfaction neural networks attempt to satisfy as many constraints as possible, as well as possible. We applied such networks to the major paradigms of cognitive dissonance theory and are currently extending them to phenomena in belief perseverance and in cognitive balance theory.
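A minimal constraint-satisfaction sketch, with assumed weights and unit meanings, shows the mechanism: units stand for cognitions, symmetric weights encode consistency (positive) or inconsistency (negative) between them, and repeated updates raise an overall consonance measure, the weighted sum of co-activations.

```python
import numpy as np

# Three cognitions: units 0 and 1 support each other, and both
# conflict with unit 2. Weights are illustrative, not from any model.
W = np.array([[ 0.0,  1.0, -1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])

def consonance(a, W):
    # Overall consistency: sum of w_ij * a_i * a_j over unit pairs.
    return a @ W @ a / 2.0

def settle(a, W, steps=50, rate=0.2):
    # Nudge each unit toward the net input from its neighbors,
    # clipping activations to [-1, 1]; this hill-climbs on consonance.
    a = a.copy()
    for _ in range(steps):
        a = np.clip(a + rate * (W @ a), -1.0, 1.0)
    return a

a0 = np.array([0.1, 0.1, 0.1])     # weak initial endorsement of all three
a_final = settle(a0, W)
```

From this start the network settles with the two mutually supporting cognitions fully endorsed and the conflicting one rejected, the constraint-satisfaction analogue of resolving dissonance by revising the odd belief out.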