Improved GNN Models for Constant Matrix Inversion
Published in: Neural Processing Letters, 2019-08, Vol. 50 (1), pp. 321-339
Main Authors:
Format: Article
Language: English
Summary: It was shown that multiplying the left-hand side of the classical Zhang neural network (ZNN) design rule by an appropriate positive definite matrix generates a new neural design with an improved convergence rate. Our intention is to apply a similar principle to the standard gradient neural network (GNN) model. To that end, we observe that some of the proposed models can be viewed as the multiplication of the right-hand side of the GNN model by a symmetric positive semidefinite matrix. As a final result, we propose an appropriate general pattern for defining various improvements of the standard GNN design for online real-time matrix inversion in the time-invariant case. The leading idea in generating the improved models arises from a combination of two GNN patterns. The improved GNN (IGNN) design exhibits global exponential convergence at a rate faster than that of the original GNN pattern. The acceleration in the convergence rate is determined by the smallest eigenvalue of appropriate positive semidefinite matrices. IGNN models are not only generalizations of the original GNN models; they also encompass previously defined improvements of the standard GNN design.
ISSN: 1370-4621; 1573-773X
DOI: 10.1007/s11063-019-10025-9
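
The summary above describes the design pattern only abstractly. As a concrete illustration, the sketch below integrates the standard GNN flow for constant matrix inversion, dX/dt = -γ Aᵀ(AX − I), which is the negative gradient flow of E(X) = ‖AX − I‖²_F / 2, and an IGNN-style variant whose right-hand side is premultiplied by a symmetric positive semidefinite matrix, as the summary describes. The function name gnn_inverse, the multiplier M = I + AᵀA, the forward-Euler integration, and all parameter values are illustrative assumptions for this sketch, not the paper's reference formulation.

```python
import numpy as np

def gnn_inverse(A, gamma=1.0, dt=1e-3, steps=10000, M=None):
    """Integrate dX/dt = -gamma * M @ A.T @ (A @ X - I) with forward Euler.

    M=None gives the standard GNN flow (M = I); any symmetric positive
    semidefinite M yields an IGNN-style accelerated flow (illustrative
    choice, not the paper's reference formulation).
    """
    n = A.shape[0]
    I = np.eye(n)
    X = np.zeros((n, n))                      # common zero initial state
    for _ in range(steps):
        rhs = -gamma * A.T @ (A @ X - I)      # standard GNN right-hand side
        if M is not None:
            rhs = M @ rhs                     # premultiply by the SPSD matrix
        X = X + dt * rhs                      # forward-Euler step
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Well-conditioned random test matrix (hypothetical example data).
    A = 2.0 * np.eye(3) + 0.3 * rng.standard_normal((3, 3))
    M = np.eye(3) + A.T @ A                   # illustrative SPSD multiplier
    for label, Mk in (("GNN ", None), ("IGNN", M)):
        X = gnn_inverse(A, M=Mk)
        res = np.linalg.norm(A @ X - np.eye(3))
        print(f"{label} residual ||A X - I||_F = {res:.2e}")
```

In this toy setting M = I + AᵀA commutes with AᵀA, so each error mode decays at rate γλ(1 + λ) instead of γλ, where λ ranges over the eigenvalues of AᵀA; the slowest mode thus improves from λ_min to λ_min(1 + λ_min), consistent with the summary's claim that the acceleration is governed by the smallest eigenvalue of the added positive semidefinite matrix.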