Effective backpropagation training with variable stepsize

Bibliographic Details
Published in: Neural Networks, 1997, Vol. 10 (1), p. 69-82
Main Authors: Magoulas, G. D.; Vrahatis, M. N.; Androulakis, G. S.
Format: Article
Language: English
Description
Summary: The issue of variable stepsize in the backpropagation training algorithm has been widely investigated, and several techniques employing heuristic factors have been suggested to improve training time and reduce convergence to local minima. In this contribution, backpropagation training is based on a modified steepest descent method which allows a variable stepsize. It is computationally efficient and possesses interesting convergence properties, utilizing estimates of the Lipschitz constant without any additional computational cost. The algorithm has been implemented and tested on several problems, and the results have been very satisfactory. Numerical evidence shows that the method is robust, with good average performance on many classes of problems. Copyright 1996 Elsevier Science Ltd.
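
The abstract describes the method only in outline. As a minimal sketch of the general idea, assuming the stepsize is set from a local Lipschitz estimate computed from consecutive weight vectors and gradients (so no gradient evaluations beyond those of plain steepest descent are required), a variable-stepsize descent loop might look like the following Python code. The rule eta_k = 1/(2*L_k) and all names here are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def descend_variable_stepsize(grad, w, eta0=0.1, max_iter=1000, tol=1e-6):
    # Steepest descent whose stepsize is adapted from a local Lipschitz
    # estimate. After a bootstrap step with a fixed stepsize, the local
    # Lipschitz constant of the gradient is estimated from the last two
    # iterates, reusing gradients already computed (no additional cost),
    # and the stepsize is set to 1 / (2 * L_k) -- an assumed rule.
    g = grad(w)
    w_prev, g_prev = w.copy(), g.copy()
    w = w - eta0 * g                      # bootstrap step, fixed stepsize
    for _ in range(max_iter):
        g = grad(w)
        if np.linalg.norm(g) < tol:       # converged: gradient nearly zero
            break
        # Local Lipschitz estimate L_k = ||g_k - g_{k-1}|| / ||w_k - w_{k-1}||
        dw = np.linalg.norm(w - w_prev)
        L = np.linalg.norm(g - g_prev) / dw if dw > 0 else 0.0
        eta = 1.0 / (2.0 * L) if L > 0 else eta0
        w_prev, g_prev = w.copy(), g.copy()
        w = w - eta * g                   # variable-stepsize descent step
    return w

# Usage on a convex quadratic E(w) = 0.5 * w^T A w, whose gradient is A w:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
w = descend_variable_stepsize(lambda w: A @ w, np.array([4.0, -3.0]))
print(w)  # approaches the minimizer [0, 0]

In a backpropagation setting, grad would return the gradient of the network's error with respect to all weights; the same estimate then adapts the stepsize at every epoch without extra forward or backward passes.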
ISSN: 0893-6080 (print); 1879-2782 (electronic)
DOI: 10.1016/S0893-6080(96)00052-4