An improved weight-constrained neural network training algorithm

Bibliographic Details
Published in: Neural Computing & Applications, 2020-05, Vol. 32 (9), pp. 4177-4185
Main Authors: Livieris, Ioannis E., Pintelas, Panagiotis
Format: Article
Language:English
Summary: In this work, we propose an improved weight-constrained neural network training algorithm, named iWCNN. The proposed algorithm exploits the numerical efficiency of the L-BFGS matrices together with a gradient-projection strategy for handling the bounds on the weights. Additionally, an attractive property of iWCNN is that it utilizes a new scaling factor for defining the initial Hessian approximation used in the L-BFGS formula. Since the L-BFGS Hessian approximation is defined utilizing a small number of correction vector pairs, our motivation is to further exploit them in order to increase the efficiency of the training algorithm and the convergence rate of the minimization process. The preliminary numerical experiments provide empirical evidence that the proposed training algorithm accelerates the training process.
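To illustrate the class of method the abstract describes, the following is a minimal sketch of gradient-projection L-BFGS on a box-constrained problem. It uses the classical scaling factor gamma = s^T y / y^T y for the initial Hessian approximation; iWCNN's new scaling factor, its line search, and all function and parameter names below are not taken from the paper and are purely illustrative.

```python
import numpy as np

def two_loop(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion.

    The initial Hessian approximation H0 = gamma * I uses the common
    scaling gamma = s^T y / y^T y from the most recent correction pair
    (iWCNN proposes a refinement of this factor, not reproduced here).
    Returns an approximation of H^{-1} * grad.
    """
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    if s_list:
        s, y = s_list[-1], y_list[-1]
        gamma = s.dot(y) / y.dot(y)
    else:
        gamma = 1.0                      # no pairs yet: H0 = I
    r = gamma * q
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / y.dot(s)
        b = rho * y.dot(r)
        r += (a - b) * s
    return r

def projected_lbfgs(f_grad, w, lo, hi, iters=100, lr=1.0, m=5):
    """Gradient-projection L-BFGS: quasi-Newton step, then project
    the weights back onto the box [lo, hi]."""
    s_list, y_list = [], []
    g = f_grad(w)
    for _ in range(iters):
        d = two_loop(g, s_list, y_list)
        w_new = np.clip(w - lr * d, lo, hi)   # projection onto bounds
        g_new = f_grad(w_new)
        s, y = w_new - w, g_new - g
        if y.dot(s) > 1e-10:                  # curvature condition
            s_list.append(s); y_list.append(y)
            if len(s_list) > m:               # keep only m recent pairs
                s_list.pop(0); y_list.pop(0)
        w, g = w_new, g_new
    return w

# Toy problem: minimize ||w - c||^2 subject to -1 <= w_i <= 1;
# the constrained minimizer is clip(c, -1, 1).
c = np.array([2.0, -3.0, 0.5])
w_star = projected_lbfgs(lambda w: 2.0 * (w - c), np.zeros(3), -1.0, 1.0)
# w_star ≈ [1.0, -1.0, 0.5]
```

On this toy quadratic the iterate reaches the projected optimum in two steps, since the two-loop recursion recovers the exact Newton direction once one curvature pair is stored; a neural-network loss would additionally require a line search and a stochastic or batch gradient.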
ISSN: 0941-0643; 1433-3058
DOI: 10.1007/s00521-019-04342-2