A linear relation between input and first layer in neural networks
Published in: Annals of Mathematics and Artificial Intelligence, 2019-12, Vol. 87(4), pp. 361-372
Format: Article
Language: English
Summary: Artificial neural networks are growing in both number of applications and complexity, which makes minimizing the number of units important for some practical implementations. A particular problem is the minimum number of units that a feed-forward neural network needs in its first layer. To study this problem, a family of classification problems is defined under a continuity hypothesis, whereby inputs that are close to some set of points may share the same category. Given a set S of k-dimensional inputs, and letting N be a feed-forward neural network that classifies any input in S within a fixed error, it is proved that N requires Θ(k) units in the first layer if N can solve any instance from the given family of classification problems. Furthermore, this asymptotic result is optimal.
ISSN: 1012-2443, 1573-7470
DOI: 10.1007/s10472-019-09657-3