
Improving Generalization Capabilities of Dynamic Neural Networks

Bibliographic Details
Published in: Neural Computation, 2004-06, Vol. 16 (6), pp. 1253-1282
Main Authors: Galicki, Miroslaw, Leistritz, Lutz, Zwick, Ernst Bernhard, Witte, Herbert
Format: Article
Language: English
Description
Summary: This work addresses the problem of improving the generalization capabilities of continuous recurrent neural networks. The learning task is transformed into an optimal control framework in which the weights and the initial network state are treated as unknown controls. A new learning algorithm based on a variational formulation of Pontryagin's maximum principle is proposed. Under reasonable assumptions, its convergence is discussed. Numerical examples are given that demonstrate an essential improvement of generalization capabilities after the learning process of a dynamic network.
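
A minimal sketch of the idea described in the abstract (not the authors' implementation): a continuous-time recurrent network is integrated forward, and the costate (adjoint) equations of Pontryagin's maximum principle are integrated backward to obtain gradients with respect to the two "controls", the weight matrix W and the initial state x0. The dynamics, the quadratic terminal cost, the Euler step size, and all function names below are illustrative assumptions.

```python
# Illustrative sketch: training a continuous-time RNN by treating W and x0 as
# controls, with gradients from the adjoint (costate) equations.
import numpy as np

def f(x, W):
    """Network dynamics dx/dt = -x + W @ tanh(x) (assumed form)."""
    return -x + W @ np.tanh(x)

def simulate(x0, W, dt, steps):
    """Forward Euler integration; returns the whole state trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1], W))
    return np.array(xs)

def adjoint_grads(xs, W, target, dt):
    """Backward integration of the costate p and accumulation of dJ/dW, dJ/dx0
    for the terminal cost J = 0.5 * ||x(T) - target||^2."""
    n = W.shape[0]
    p = xs[-1] - target                       # p(T) = dJ/dx(T)
    gW = np.zeros_like(W)
    for k in range(len(xs) - 2, -1, -1):
        s = np.tanh(xs[k])
        Jx = -np.eye(n) + W * (1.0 - s**2)    # Jacobian df/dx
        gW += dt * np.outer(p, s)             # dH/dW accumulated over time
        p = p + dt * (Jx.T @ p)               # dp/dt = -(df/dx)^T p, backward Euler step
    return gW, p                              # p at t = 0 is dJ/dx0

# Toy usage: fit one terminal state by gradient descent on (W, x0).
rng = np.random.default_rng(0)
n, dt, steps, lr = 4, 0.02, 100, 0.1
W = 0.1 * rng.standard_normal((n, n))
x0 = 0.1 * rng.standard_normal(n)
target = np.array([0.5, -0.5, 0.3, 0.0])
for epoch in range(200):
    xs = simulate(x0, W, dt, steps)
    gW, gx0 = adjoint_grads(xs, W, target, dt)
    W -= lr * gW
    x0 -= lr * gx0
print("terminal error:", np.linalg.norm(simulate(x0, W, dt, steps)[-1] - target))
```

The paper's algorithm is a variational formulation with a discussed convergence analysis; the sketch above only shows the generic adjoint-gradient mechanism that such an optimal control treatment rests on.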
ISSN: 0899-7667, 1530-888X
DOI: 10.1162/089976604773717603