Model-based action planning involves cortico-cerebellar and basal ganglia networks


Bibliographic Details
Published in:Scientific reports 2016-08, Vol.6 (1), p.31378-31378, Article 31378
Main Authors: Fermin, Alan S. R., Yoshida, Takehiko, Yoshimoto, Junichiro, Ito, Makoto, Tanaka, Saori C., Doya, Kenji
Format: Article
Language:English
Description
Summary: Humans can select actions by learning, planning, or retrieving motor memories. Reinforcement Learning (RL) associates these processes with three major classes of action-selection strategies: exploratory RL learns state-action values through exploration; model-based RL uses internal models to simulate the future states reached by hypothetical actions; and motor-memory RL reuses previously successful state-action mappings. To investigate the neural substrates that implement these strategies, we conducted a functional magnetic resonance imaging (fMRI) experiment while humans performed a sequential action-selection task under conditions that promoted the use of a specific RL strategy. The ventromedial prefrontal cortex and ventral striatum increased activity in the exploratory condition; the dorsolateral prefrontal cortex, dorsomedial striatum, and lateral cerebellum in the model-based condition; and the supplementary motor area, putamen, and anterior cerebellum in the motor-memory condition. These findings suggest that a distinct prefrontal-basal ganglia and cerebellar network implements the model-based RL action-selection strategy.
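The three strategy classes named in the abstract can be contrasted in code. The following is a minimal illustrative sketch on a hypothetical two-state toy task (the environment, parameter values, and all function names are assumptions for illustration, not the authors' experimental task): exploratory RL learns state-action values from sampled experience, model-based RL evaluates hypothetical actions by simulating their outcomes with an internal model, and motor-memory RL simply replays the stored best state-action mapping.

```python
# Illustrative sketch only: a hypothetical two-state task used to contrast
# the three RL action-selection strategies described in the abstract.
import random

random.seed(0)
N_STATES, N_ACTIONS = 2, 2

def step(state, action):
    # Assumed toy dynamics: action 1 in state 0 reaches state 1 (reward 1).
    next_state = 1 if (state == 0 and action == 1) else 0
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

# 1) Exploratory (model-free) RL: learn Q(s, a) by epsilon-greedy exploration.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2
state = 0
for _ in range(500):
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)                        # explore
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])   # exploit
    next_state, reward = step(state, action)
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state])
                                 - Q[state][action])
    state = next_state

# 2) Model-based RL: simulate the future state reached by each hypothetical
#    action (the true step function stands in for a learned internal model).
def model_based_action(state):
    def simulated_value(a):
        s2, r = step(state, a)      # mental simulation, no real interaction
        return r + gamma * max(Q[s2])
    return max(range(N_ACTIONS), key=simulated_value)

# 3) Motor-memory RL: store and replay the past successful mapping.
motor_memory = {s: max(range(N_ACTIONS), key=lambda a: Q[s][a])
                for s in range(N_STATES)}

print(model_based_action(0), motor_memory[0])
```

In this toy setting all three strategies converge on the same choice; the experimental conditions in the article instead dissociate them by manipulating which strategy is feasible.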
ISSN:2045-2322
DOI:10.1038/srep31378