Computational modeling of epiphany learning

Proc Natl Acad Sci U S A. 2017 May 2;114(18):4637-4642. doi: 10.1073/pnas.1618161114. Epub 2017 Apr 17.

Abstract

Models of reinforcement learning (RL) are prevalent in the decision-making literature, but not all behavior seems to conform to the gradual convergence that is a central feature of RL. In some cases, learning seems to happen all at once. Limited prior research on these "epiphanies" has shown evidence of sudden changes in behavior, but it remains unclear how such epiphanies occur. We propose a sequential-sampling model of epiphany learning (EL) and test it using an eye-tracking experiment. In the experiment, subjects repeatedly play a strategic game that has an optimal strategy. Subjects can learn over time from feedback but are also allowed to commit to a strategy at any time, eliminating all other options and opportunities to learn. We find that the EL model is consistent with the choices, eye movements, and pupillary responses of subjects who commit to the optimal strategy (correct epiphany), but not always with those of subjects who commit to a suboptimal strategy or who do not commit at all. Our findings suggest that EL is driven by a latent evidence-accumulation process that can be revealed with eye-tracking data.
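To make the sequential-sampling account described in the abstract concrete, the following is a minimal illustrative simulation, not the paper's fitted model: latent evidence for the optimal strategy accumulates noisily from trial-by-trial feedback, and the subject commits at the first boundary crossing. The function name `simulate_epiphany` and the parameter values (`drift`, `noise_sd`, and the `upper`/`lower` bounds) are hypothetical choices for illustration only.

```python
import numpy as np

def simulate_epiphany(drift=0.04, noise_sd=0.3, upper=1.0, lower=-1.0,
                      n_trials=100, rng=None):
    """One subject under a two-boundary sequential-sampling sketch of
    epiphany learning. Evidence for the optimal strategy accumulates
    noisily over repeated plays of the game; crossing the upper bound
    is a commitment to the optimal strategy (correct epiphany),
    crossing the lower bound is a commitment to a suboptimal strategy,
    and never crossing either bound leaves the subject uncommitted."""
    rng = np.random.default_rng() if rng is None else rng
    evidence = 0.0
    for t in range(1, n_trials + 1):
        # Feedback from each trial nudges the latent evidence toward
        # the optimal strategy (drift term) plus trial-to-trial noise.
        evidence += drift + rng.normal(0.0, noise_sd)
        if evidence >= upper:
            return "correct epiphany", t    # commits; learning stops here
        if evidence <= lower:
            return "incorrect epiphany", t  # commits to a suboptimal strategy
    return "no commitment", None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    outcomes = [simulate_epiphany(rng=rng)[0] for _ in range(1000)]
    for label in ("correct epiphany", "incorrect epiphany", "no commitment"):
        print(f"{label}: {outcomes.count(label)}/1000")
```

In the paper's framework the accumulation process is latent and is inferred from eye movements and pupillary responses; this sketch exposes the accumulator directly purely for illustration.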

Keywords: beauty contest; decision making; epiphany learning; eye tracking; pupil dilation.

Publication types

  • Clinical Trial
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Adult
  • Computer Simulation*
  • Eye Movements / physiology*
  • Female
  • Humans
  • Learning / physiology*
  • Male
  • Models, Neurological*