An incremental non-iterative learning method for one-layer feedforward neural networks
Published in: Applied Soft Computing, 2018-09, Vol. 70, pp. 951–958
Main Authors: , ,
Format: Article
Language: English
Summary:
•A non-iterative and incremental learning method for neural networks is proposed.
•It is a hyperparameter-free learning method.
•Weights are obtained by linear equations and Singular Value Decomposition.
•The method's accuracy is comparable to other state-of-the-art approaches.
•Its efficiency is better, making the method suitable for large-scale problems.
In the machine learning literature, and especially in the literature on artificial neural networks, most methods are iterative and operate in batch mode. However, many of the standard algorithms cannot efficiently manage the emerging large-scale data sets obtained from new real-world applications. Novel proposals to address these challenges are mainly iterative approaches based on incremental or distributed learning algorithms. Few learning methods, however, are based on non-iterative approaches, which have certain advantages over iterative models in dealing more efficiently with these new challenges. We have developed a non-iterative, incremental and hyperparameter-free learning method for one-layer feedforward neural networks without hidden layers. This method efficiently obtains the optimal parameters of the network regardless of whether the data set contains more samples than variables or vice versa. It does so by using a square loss function that measures errors before the output activation functions and scales them by the slope of these functions at each data point. The outcome is a system of linear equations that yields the network's weights and is solved by means of Singular Value Decomposition. We analyze the behavior of the algorithm, comparing its performance and scaling properties with other state-of-the-art approaches. Experimental results demonstrate that the proposed method appropriately solves a wide range of classification problems and is able to deal efficiently with large-scale tasks.
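To make the construction concrete, the sketch below illustrates the kind of computation the abstract describes, under assumptions of ours: a logistic output activation, a single output neuron, and targets clipped away from {0, 1} so the inverse activation is defined. The class and method names are hypothetical, not taken from the paper.

```python
import numpy as np

class OneLayerNonIterative:
    """Hypothetical sketch: one-layer feedforward network whose weights
    come from solving a linear system, not from iterative training."""

    def __init__(self, n_inputs):
        n = n_inputs + 1                      # +1 for the bias term
        self.A = np.zeros((n, n))             # accumulated normal matrix
        self.b = np.zeros(n)                  # accumulated right-hand side
        self.w = np.zeros(n)                  # network weights

    def partial_fit(self, X, d, eps=1e-6):
        """Incremental update: accumulate A and b from a new batch,
        then re-solve for the weights."""
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
        d = np.clip(d, eps, 1.0 - eps)
        d_bar = np.log(d / (1.0 - d))         # desired pre-activation f^{-1}(d)
        s2 = (d * (1.0 - d)) ** 2             # squared slope f'(f^{-1}(d))
        # The error is measured before the activation and scaled by the
        # activation's slope, so the loss is quadratic in w and its
        # minimum is given by a linear system A w = b.
        self.A += (Xb * s2[:, None]).T @ Xb
        self.b += Xb.T @ (s2 * d_bar)
        # np.linalg.pinv computes the pseudoinverse via SVD, which works
        # whether there are more samples than variables or vice versa.
        self.w = np.linalg.pinv(self.A) @ self.b
        return self

    def predict(self, X):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        return 1.0 / (1.0 + np.exp(-(Xb @ self.w)))


# Toy usage on synthetic binary labels:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
d = (X[:, 0] + X[:, 1] > 0).astype(float)
net = OneLayerNonIterative(n_inputs=5).partial_fit(X, d)
```

Because A and b are plain sums over samples, a new batch only adds to them and the weights can be recomputed at any point, which is what makes a scheme of this kind both incremental and non-iterative.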
ISSN: 1568-4946, 1872-9681
DOI: 10.1016/j.asoc.2017.07.061