
Disentangling prediction error and value in a formal test of dopamine's role in reinforcement learning

Authors: Usypchuk AA, Maes EJP, Lozzi M, Avramidis DK, Schoenbaum G, Esber GR, Gardner MPH, Iordanova MD


Affiliations

1 Department of Psychology, Centre for Studies in Behavioural Neurobiology, Concordia University, Montreal, QC H4B 1R6, Canada.
2 NIDA Intramural Research Program, Baltimore, MD 21224, USA; Departments of Anatomy & Neurobiology and Psychiatry, University of Maryland School of Medicine, Baltimore, MD 21201, USA; Solomon H. Snyder Department of Neuroscience, the Johns Hopkins University, Baltimore, MD 21287, USA.
3 Department of Psychology, Centre for Studies in Behavioural Neurobiology, Concordia University, Montreal, QC H4B 1R6, Canada. Electronic address: mihaela.iordanova@concordia.ca.

Description

The discovery that midbrain dopamine (DA) transients can be mapped onto reward prediction errors (RPEs), the critical signal that drives learning, is a landmark in neuroscience. Causal support for the RPE hypothesis comes from studies showing that stimulating DA neurons can drive learning under conditions where it would not otherwise occur.1,2,3 However, such stimulation might also promote learning by adding reward value and indirectly inducing an RPE. This added value could support new learning even when it is insufficient to support instrumental behavior.4,5 Thus, these competing interpretations are challenging to disentangle and require direct comparison under matched conditions. We developed two computational models grounded in temporal difference reinforcement learning (TDRL)6,7,8 that dissociate the role of DA as an RPE versus a value signal. We validated our models by showing that they both predict learning (unblocking) when ventral tegmental area (VTA) DA stimulation occurs during expected reward delivery in a behavioral blocking design and confirmed this behaviorally. We then contrasted the models by delivering constant optogenetic stimulation during reward across both learning phases of blocking. The value model predicted blocking; the RPE model predicted unblocking. Behavioral results aligned with the latter. Moreover, the RPE model uniquely predicted that constant stimulation would unblock learning at higher frequencies (>20 Hz) when the artificial error alone drives learning. This, too, was confirmed experimentally. We demonstrate a principled computational and empirical dissociation between DA as an RPE versus a value signal. Our results advance understanding of how DA neuron stimulation drives learning.
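The blocking logic at the heart of the abstract can be illustrated with a toy error-correction simulation. This is a minimal sketch and not the authors' TDRL models: the learning rate `alpha`, reward magnitude `r`, and stimulation increment `eps` are arbitrary illustrative choices, and stimulation during reward is modeled simply as an added increment to the teaching signal, which is why (as the abstract notes for the phase-2-only design) the value and RPE accounts make the same unblocking prediction here and required the authors' constant-stimulation contrast to be pulled apart.

```python
# Toy Rescorla-Wagner-style simulation of blocking vs. unblocking.
# Illustrative sketch only; NOT the paper's computational models.

def train(trials, V, alpha=0.2):
    """Update cue weights V in place; each trial is (cues, outcome)."""
    for cues, outcome in trials:
        pred = sum(V[c] for c in cues)   # summed prediction over cues
        delta = outcome - pred           # prediction error
        for c in cues:
            V[c] += alpha * delta
    return V

r, eps = 1.0, 0.5                        # reward; stimulation increment (assumed)
phase1 = [(("A",), r)] * 50              # phase 1: cue A alone predicts reward

# Classic blocking: no stimulation in phase 2 -> X acquires ~no value.
V = {"A": 0.0, "X": 0.0}
train(phase1, V)
train([(("A", "X"), r)] * 50, V)
print(f"blocking:   V(X) = {V['X']:.3f}")    # ~0.0

# Unblocking: DA stimulation during reward, modeled as an added
# increment eps to the outcome signal -> X acquires value.
V = {"A": 0.0, "X": 0.0}
train(phase1, V)
train([(("A", "X"), r + eps)] * 50, V)
print(f"unblocking: V(X) = {V['X']:.3f}")    # ~0.25
```

With `eps = 0`, cue A's learned prediction cancels the reward and X stays near zero (blocking); with `eps > 0`, the stimulation-induced error is shared between A and X, so X acquires value (unblocking), mirroring the behavioral validation described above.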


Keywords: Rescorla-Wagner model; channelrhodopsin; error correction; mesolimbic; optogenetics; rodent; scalar value; temporal difference reinforcement learning; tyrosine hydroxylase


Links

PubMed: https://pubmed.ncbi.nlm.nih.gov/40738112/

DOI: 10.1016/j.cub.2025.06.076