Proving the efficacy of complementary inputs for multilayer neural networks
Format: Conference Proceeding
Language: English
Summary: This paper proposes and discusses a backpropagation-based training approach for multilayer networks that counteracts the tendency of typical backpropagation-based training algorithms to "favor" examples with large input feature values. This problem can occur in any real-valued input space and can create a surprising degree of skew in the learned decision surface, even with relatively simple training sets. The proposed method modifies the original input feature vectors in the training set by appending complementary inputs, which essentially doubles the number of inputs to the network. The paper proves that this modification does not increase the network complexity by showing that the network with complementary inputs can be mapped back into the original feature space.
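The summary does not spell out how the complementary inputs are computed. A common reading of "complementary input" is that, for features scaled to a known range, each feature x is paired with its complement (e.g., 1 - x for inputs in [0, 1]). The sketch below illustrates that preprocessing step under this assumption; the function name `append_complementary_inputs` and the [lo, hi] scaling are illustrative choices, not definitions taken from the paper.

```python
import numpy as np

def append_complementary_inputs(X, lo=0.0, hi=1.0):
    """Append a complement (hi + lo - x) for each feature, doubling input width.

    Assumes features are already scaled to [lo, hi]; the exact complement used
    in the paper is not specified in this abstract, so this is one plausible
    formulation.
    """
    X = np.asarray(X, dtype=float)
    # Column-wise concatenation: an (n, d) training matrix becomes (n, 2d).
    return np.concatenate([X, hi + lo - X], axis=1)

# Example: a 3-example, 2-feature training set becomes 3 x 4.
X = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.5, 0.5]])
print(append_complementary_inputs(X))
```

Under this reading, a network trained on the augmented vectors sees every feature alongside its complement, so examples with small feature values contribute gradient signal comparable to examples with large ones.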
ISSN: 2161-4393, 2161-4407
DOI: 10.1109/IJCNN.2011.6033480