Supporting generalization in non-human primate behavior by tapping into structural knowledge: Examples from sensorimotor mappings, inference, and decision-making
Published in: Progress in Neurobiology, 2021-06, Vol. 201, Article 101996
Main Authors: , , , , ,
Format: Article
Language: English
Summary:
• Ultimately we wish to understand how brains operate within a naturalistic, closed action-perception loop.
• Reinforcement learning and control theory naturally wade across traditional subfields of neuroscience.
• We develop an experimental ecosystem inspired by the frameworks above, tapping into structural knowledge.
• Macaques naturally generalize to novel sensorimotor mappings, cases of inference, and multi-option decision-making.
The complex behaviors we ultimately wish to understand are far from those currently used in systems neuroscience laboratories. A salient difference is the closed loop between action and perception, prominently present in natural but not laboratory behaviors. The framework of reinforcement learning and control naturally wades across action and perception, and thus is poised to inform the neurosciences of tomorrow, not only as a data-analysis and modeling framework, but also in guiding experimental design. We argue that this theoretical framework emphasizes active sensing, dynamical planning, and the leveraging of structural regularities as key operations for intelligent behavior within uncertain, time-varying environments. Similarly, we argue that we may study natural task strategies and their neural circuits without over-training animals when the tasks we use tap into our animals' structural knowledge. As proof of principle, we teach animals to navigate through a virtual environment using a joystick, i.e., to explore a well-defined and repetitive structure governed by the laws of physics. Once these animals have learned to 'drive', without further training they naturally (i) show zero- or one-shot learning of novel sensorimotor contingencies, (ii) infer the evolving path of dynamically changing latent variables, and (iii) make decisions consistent with maximizing reward rate. Such task designs allow for the study of flexible and generalizable, yet controlled, behaviors. In turn, they allow for the exploitation of pillars of intelligence (flexibility, prediction, and generalization), properties whose neural underpinnings have remained elusive.
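The closed action-perception loop the abstract emphasizes can be illustrated with a minimal sketch: the agent's action changes the world state, and the next observation depends on that change, so perception and action cannot be studied in isolation. The 1-D tracking environment and proportional policy below are hypothetical placeholders for illustration only, not the paper's virtual-navigation task or model.

```python
import random

def run_closed_loop(n_steps=100, gain=0.5, seed=0):
    """Simulate a closed action-perception loop.

    Each step: the agent perceives the error between a target and its
    current position, acts to reduce that error, and the environment
    updates the position (with a little motor noise), which in turn
    shapes the next percept. Task and policy are illustrative, not
    taken from the paper.
    """
    rng = random.Random(seed)
    target, position = 1.0, 0.0
    errors = []
    for _ in range(n_steps):
        observation = target - position           # perception: sensed error
        action = gain * observation               # action: proportional policy
        position += action + rng.gauss(0, 0.01)   # environment: noisy state update
        errors.append(abs(target - position))
    return errors

errors = run_closed_loop()
```

Because the loop is closed, the statistics of what the agent senses are partly of its own making: the tracking error shrinks over steps only through the coupling of action and perception.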
ISSN: | 0301-0082 1873-5118 |
DOI: | 10.1016/j.pneurobio.2021.101996 |