Boosting and Residual Learning Scheme with Pseudoinverse Learners
Format: Conference Proceeding
Language: English
Summary: Traditional gradient-descent-based optimization algorithms for neural networks suffer from many weaknesses, such as slow convergence, vanishing gradients, and getting trapped in local minima. Alternative non-gradient-descent learning algorithms, such as the pseudoinverse learning algorithm (PIL), have therefore been proposed and widely applied across domains. However, when a variant of the PIL with randomly configured weight parameters is adopted, its generalization ability needs further improvement, even though its training efficiency is excellent. Integrating ideas from ensemble learning, we propose two methods to enhance the basic PIL. The first is equivalent to an additive model that raises the network's performance through a boosting mechanism; the second recursively rectifies the hidden-layer output of the network, and the relatively better model is then used for subsequent prediction. Comprehensive experiments on several datasets show that the proposed methods are effective at improving classification accuracy.
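The abstract's core ideas can be illustrated with a minimal sketch: a randomly configured hidden layer whose output weights are solved by the Moore-Penrose pseudoinverse instead of gradient descent, followed by a boosting-style additive stage that fits the residual of the first module. This is an assumption-laden toy reconstruction, not the paper's actual method or datasets; all data, sizes, and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs (hypothetical stand-in for the paper's datasets).
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)]).astype(int)
Y = np.eye(2)[y]                       # one-hot targets

# PIL variant with randomly configured weights: the hidden layer is never trained.
n_hidden = 64
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                 # hidden-layer output

# Output weights via the pseudoinverse -- the non-gradient-descent step.
beta = np.linalg.pinv(H) @ Y

# Boosting-style additive stage (sketch of the first proposed method):
# a second random module fits the residual left by the first one.
R = Y - H @ beta
W2 = rng.normal(size=(2, n_hidden))
b2 = rng.normal(size=n_hidden)
H2 = np.tanh(X @ W2 + b2)
beta2 = np.linalg.pinv(H2) @ R

pred = H @ beta + H2 @ beta2           # additive ensemble prediction
acc = np.mean(np.argmax(pred, axis=1) == y)
```

The second proposed method (recursively rectifying the hidden-layer output) would instead modify `H` in place across iterations; the additive form above is shown only because its structure follows directly from the abstract's description.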
ISSN: 2577-1655
DOI: 10.1109/SMC42975.2020.9283232