
Dynamical Motor Control Learned with Deep Deterministic Policy Gradient


Bibliographic Details
Published in: Computational Intelligence and Neuroscience, 2018-01, Vol. 2018 (2018), p. 1-11
Main Authors: Shi, Haibo; Li, Jie; Sun, Yaoru
Format: Article
Language:English
Description
Summary: Conventional models of motor control exploit the spatial representation of the controlled system to generate control commands. Typically, the control command is computed from the feedback state at a specific instant in time, so the controller behaves like an optimal regulator or spatial filter on the feedback state. Yet recent neuroscience studies have found that the motor network may constitute an autonomous dynamical system, with the temporal patterns of the control command contained in the dynamics of the motor network itself, that is, the dynamical system hypothesis (DSH). Inspired by these findings, we propose a computational model that incorporates this neural mechanism, in which the control command unfolds from a dynamical controller whose initial state is specified by the task parameters. The model is trained in a trial-and-error manner within the framework of deep deterministic policy gradient (DDPG). The experimental results show that the dynamical controller successfully learns the control policy for arm reaching movements, and an analysis of the dynamical controller's internal activity provides computational evidence for the DSH of neural coding in motor cortices.
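
The architecture the summary describes can be sketched compactly: a recurrent controller whose initial hidden state encodes the task parameters and which then evolves autonomously (no per-step sensory input), emitting a motor command at every time step, serving as the actor of a DDPG agent. The following PyTorch code is a minimal illustration under those assumptions; the class name, dimensions, and GRU-based cell are hypothetical choices for exposition, not the authors' actual implementation.

    # Hedged sketch of the abstract's "dynamical controller" idea.
    # All names and sizes here are illustrative assumptions.
    import torch
    import torch.nn as nn

    class DynamicalController(nn.Module):
        """An autonomous RNN whose initial state is set by task parameters.

        Once initialized, the network unfolds on its own internal dynamics
        and reads out a control command at each time step, mirroring the
        dynamical-system-hypothesis view sketched in the summary.
        """
        def __init__(self, task_dim, hidden_dim, command_dim):
            super().__init__()
            self.init_map = nn.Linear(task_dim, hidden_dim)    # task params -> initial state
            self.cell = nn.GRUCell(1, hidden_dim)              # driven only by a constant dummy input
            self.readout = nn.Linear(hidden_dim, command_dim)  # hidden state -> motor command

        def forward(self, task_params, n_steps):
            h = torch.tanh(self.init_map(task_params))         # initial state specified by the task
            dummy = torch.zeros(task_params.size(0), 1)        # no external drive during the movement
            commands = []
            for _ in range(n_steps):
                h = self.cell(dummy, h)                        # internal dynamics unfold autonomously
                commands.append(self.readout(h))
            return torch.stack(commands, dim=1)                # (batch, time, command_dim)

    # In DDPG this controller would play the actor: a critic Q(s, a) scores the
    # emitted commands, the actor is updated along the critic's action gradient
    # (the deterministic policy gradient), and target networks plus a replay
    # buffer stabilize training.
    actor = DynamicalController(task_dim=4, hidden_dim=64, command_dim=2)
    commands = actor(torch.randn(8, 4), n_steps=50)  # 8 reaches, 50 time steps
    print(commands.shape)                            # torch.Size([8, 50, 2])

One design point worth noting: because the per-step input is a constant, all trial-to-trial variation in the command sequence comes from the initial state alone, which is what makes the controller "autonomous" in the DSH sense.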
ISSN: 1687-5265, 1687-5273
DOI: 10.1155/2018/8535429