Insights on the different convergences in Extreme Learning Machine
Published in: Neurocomputing (Amsterdam), 2024-09, Vol. 599, Article 128061
Main Authors: , ,
Format: Article
Language: English
Summary: Neural Networks (NNs) are a powerful tool in approximation theory because of the existence of Universal Approximation (UA) results. In recent decades, significant attention has been given to Extreme Learning Machines (ELMs), typically employed for the training of single-layer NNs, and for which a UA result can also be proven. In a generic NN, the design of the optimal approximator can be recast as a non-convex optimization problem that is particularly demanding from the computational viewpoint. Under the adoption of ELM, however, the optimization task reduces to a (possibly rectangular) linear problem. In this work, we detail how to design a sequence of ELM networks trained on a target dataset. Different convergence procedures are proposed and tested on reference datasets constructed to be equivalent to approximation problems.
Highlights:
• We study the concept of convergence of NNs trained via ELM for interpolation.
• The concepts of increasing approximation capabilities (new neurons/new data) are unified.
• The convergence rate is tested on standard test problems of interpolation theory.
• The matrix formulation of the training process in all the different cases is detailed.
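The computational point the abstract makes, that freezing randomly drawn hidden-layer parameters turns the training of a single-layer network into a possibly rectangular linear least-squares problem, can be illustrated with a minimal sketch. The code below is not taken from the paper: the tanh activation, the neuron counts, and the sine target are illustrative assumptions, and the loop is only a crude stand-in for the convergence experiments the article describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden):
    """Train a single-hidden-layer network in the ELM fashion:
    hidden weights and biases are drawn at random and frozen, so
    only the output weights beta are optimized."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)  # hidden-layer matrix, shape (n_samples, n_hidden)
    # Training is now linear in beta: min_beta ||H beta - y||_2.
    # H is rectangular in general, so we take the least-squares solution.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy interpolation problem: samples of sin on [0, 2*pi].
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

# A crude convergence check: error as neurons are added.
for n_hidden in (5, 10, 20, 40):
    W, b, beta = elm_fit(X, y, n_hidden)
    err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
    print(f"n_hidden={n_hidden:3d}  max error={err:.2e}")
```

Rerunning with growing n_hidden shows the error decay whose rate the paper's convergence procedures quantify on standard interpolation test problems.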
ISSN: 0925-2312 (print), 1872-8286 (online)
DOI: 10.1016/j.neucom.2024.128061