AUTHOR=Metha Jeremy A., Brian Maddison L., Oberrauch Sara, Barnes Samuel A., Featherby Travis J., Bossaerts Peter, Murawski Carsten, Hoyer Daniel, Jacobson Laura H. TITLE=Separating Probability and Reversal Learning in a Novel Probabilistic Reversal Learning Task for Mice JOURNAL=Frontiers in Behavioral Neuroscience VOLUME=13 YEAR=2020 URL=https://www.frontiersin.org/journals/behavioral-neuroscience/articles/10.3389/fnbeh.2019.00270 DOI=10.3389/fnbeh.2019.00270 ISSN=1662-5153 ABSTRACT=
The exploration/exploitation tradeoff – pursuing a known reward vs. sampling from lesser-known options in the hope of finding a better payoff – is a fundamental aspect of learning and decision making. In humans, this has been studied using multi-armed bandit tasks. The same processes have also been studied using simplified probabilistic reversal learning (PRL) tasks with binary choices. Our investigations suggest that protocols previously used to explore PRL in mice may prove beyond their cognitive capacities, with animals performing no better than chance. We therefore sought a novel probabilistic learning task that would improve behavioral responding in mice whilst still allowing investigation of the exploration/exploitation tradeoff in decision making. To achieve this, we developed a two-lever operant chamber task in which the levers carried different probabilities (high/low) of delivering a saccharin reward, with the reward contingencies associated with the levers reversing once an animal reached a criterion of 80% responding at the high-reward lever. We found that, unlike in existing PRL tasks, mice are able to learn and behave near optimally with 80% high/20% low reward probabilities. Bringing the reward probabilities closer together showed that some mice still preferred the high-reward lever at contingencies as close as 60% high/40% low. Additionally, we show that animal choice behavior can be effectively modelled using reinforcement learning (RL) models incorporating separate learning rates for positive and negative prediction errors, a perseveration parameter, and a noise parameter. This new decision task, coupled with RL analyses, opens new avenues for investigating the neuroscience of the exploration/exploitation tradeoff in decision making.
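To make the described model concrete, below is a minimal sketch of a task simulation with an agent of the kind the abstract outlines: Q-learning with separate learning rates for positive and negative prediction errors, a perseveration bonus, and softmax choice noise. Only the 80%/20% reward probabilities and the 80% reversal criterion come from the abstract; the criterion window length, all parameter values, and variable names are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Task parameters: reward probabilities and reversal criterion are from the
# abstract; the moving-window length is an assumption for illustration.
P_HIGH, P_LOW = 0.8, 0.2   # reward probabilities of the two levers
CRITERION = 0.8            # reverse once 80% of recent choices hit the high lever
WINDOW = 20                # assumed window of recent trials for the criterion
N_TRIALS = 500

# Model parameters (illustrative values; in practice these are fit per animal).
alpha_pos = 0.4   # learning rate for positive prediction errors
alpha_neg = 0.2   # learning rate for negative prediction errors
kappa     = 0.3   # perseveration bonus for repeating the previous choice
beta      = 5.0   # inverse temperature (lower = noisier choices)

Q = np.zeros(2)      # action values for the two levers
high_lever = 0       # which lever currently carries P_HIGH
prev_choice = None
recent = []          # recent choices, for the reversal criterion

for t in range(N_TRIALS):
    # Softmax choice over Q-values plus a perseveration bonus on the
    # previously chosen lever.
    bonus = np.zeros(2)
    if prev_choice is not None:
        bonus[prev_choice] = kappa
    logits = beta * (Q + bonus)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    choice = rng.choice(2, p=p)

    # Probabilistic reward delivery.
    p_reward = P_HIGH if choice == high_lever else P_LOW
    reward = float(rng.random() < p_reward)

    # Asymmetric update: separate rates for positive vs. negative errors.
    delta = reward - Q[choice]
    Q[choice] += (alpha_pos if delta > 0 else alpha_neg) * delta

    # Reverse the contingencies once the performance criterion is met.
    recent.append(choice == high_lever)
    if len(recent) > WINDOW:
        recent.pop(0)
    if len(recent) == WINDOW and np.mean(recent) >= CRITERION:
        high_lever = 1 - high_lever
        recent.clear()

    prev_choice = choice
```

Fitting such a model to real choice data would instead maximize the likelihood of the observed lever presses over the four free parameters (alpha_pos, alpha_neg, kappa, beta); the simulation above only illustrates how those parameters shape behavior around reversals.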