Emergence of complex computational structures from chaotic neural networks through reward-modulated Hebbian learning
Published in: Cerebral Cortex (New York, N.Y.: 1991), 2014-03, Vol. 24 (3), pp. 677-690
Main Authors: Hoerzer, Gregor M.; Legenstein, Robert; Maass, Wolfgang
Format: Article
Language: English
Summary: This paper addresses the question of how generic microcircuits of neurons in different parts of the cortex can attain and maintain different computational specializations. We show that if stochastic variations in the dynamics of local microcircuits are correlated with signals related to functional improvements of the brain (e.g., in the control of behavior), the computational operation of these microcircuits can become optimized for specific tasks, such as the generation of specific periodic signals and task-dependent routing of information. Furthermore, we show that working memory can autonomously emerge through reward-modulated Hebbian learning if it is needed for specific tasks. Altogether, our results suggest that reward-modulated synaptic plasticity can not only optimize the network parameters for specific computational tasks, but also initiate a functional rewiring that re-programs microcircuits, thereby generating diverse computational functions in different generic cortical microcircuits. On a more general level, this work provides a new perspective on a standard model for computations in generic cortical microcircuits (the liquid computing model). It shows that the arguably most problematic assumption of this model, the postulate of a teacher that trains neural readouts through supervised learning, can be eliminated: generic networks of neurons can learn numerous biologically relevant computations through trial and error.
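The mechanism summarized above can be illustrated with a minimal sketch, not the authors' implementation: a chaotic rate network (reservoir) whose linear readout is perturbed by exploration noise, and whose readout weights are updated only when a scalar performance measure beats its own running average. All names and hyperparameters below (network size, gain `g`, learning rate, noise level, target signal) are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): a chaotic rate network
# whose linear readout learns a periodic target via reward-modulated
# Hebbian updates. All hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N = 500                     # reservoir size (assumed)
g = 1.5                     # recurrent gain > 1 yields chaotic dynamics
dt, tau = 1.0, 10.0         # time step and neuron time constant
eta = 5e-4                  # learning rate (assumed)
sigma = 0.05                # exploration noise on the readout (assumed)
alpha = 0.8                 # factor for running averages

W = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # fixed recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                   # fixed feedback weights
w_out = np.zeros(N)                                # plastic readout weights

x = 0.5 * rng.normal(size=N)   # membrane potentials
r = np.tanh(x)                 # firing rates
z_bar, P_bar = 0.0, 0.0        # running averages of readout and performance

def target(t):
    """Example task: generate a specific periodic signal."""
    return np.sin(2.0 * np.pi * t / 600.0)

for t in range(20000):
    # noisy readout: the noise term supplies the stochastic exploration
    z = w_out @ r + sigma * rng.normal()

    # performance and binary modulatory signal: reward the update only
    # when current performance beats its recent running average
    P = -(z - target(t)) ** 2
    M = 1.0 if P > P_bar else 0.0

    # reward-modulated Hebbian rule: correlate the readout's deviation
    # from its recent average with the presynaptic rates
    w_out += eta * M * (z - z_bar) * r

    # update running averages and advance the network state;
    # the readout is fed back into the reservoir
    z_bar = alpha * z_bar + (1.0 - alpha) * z
    P_bar = alpha * P_bar + (1.0 - alpha) * P
    x += (dt / tau) * (-x + W @ r + w_fb * z)
    r = np.tanh(x)
```

The design point this sketch illustrates, matching the abstract's claim, is that no teacher signal reaches the weights directly: the update sees only presynaptic rates, the readout's own fluctuations, and a scalar comparison of performance against its recent average.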
ISSN: 1047-3211; 1460-2199
DOI: 10.1093/cercor/bhs348