Evolution and generalization of a single neurone. III. Primitive, regularized, standard, robust and minimax regressions

Bibliographic Details
Published in: Neural Networks, 2000-05, Vol. 13 (4), p. 507-523
Main Author: Raudys, S.
Format: Article
Language:English
Summary: We show that while training a single-layer perceptron (SLP), one can obtain six conventional statistical regressions: primitive, regularized, standard, standard with pseudo-inversion of the covariance matrix, robust, and minimax (support vector). The complexity of the regression equation increases with the number of iterations. Generalization accuracy depends on the type of regression obtained during training, on the data, on the learning-set size and, in certain cases, on the distribution of the components of the weight vector. For data of small intrinsic dimensionality and certain distributions of the weight-vector components, the SLP can be trained even with very short learning sequences. The type of regression obtained in SLP training can be controlled by the choice of cost function as well as by the training parameters (number of iterations, learning step, etc.). Whitening the data prior to training the perceptron is a tool for incorporating prior information into the design of the prediction rule, and helps to reduce both the generalization error and the training time.
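
The whitening transformation mentioned at the end of the abstract is easy to illustrate. The following minimal NumPy sketch is not the paper's code: the synthetic data, tanh activation, learning step, and iteration count are all illustrative assumptions. It whitens the inputs so their sample covariance becomes the identity, then trains a one-neuron perceptron by gradient descent on a sum-of-squares cost, which is the setting in which the paper traces the evolution of the weight vector through the six regression types.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated inputs and +/-1 targets (purely illustrative).
n, p = 100, 5
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))
y = np.where(X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0, 1.0, -1.0)

# Whitening: center the inputs, then rotate and rescale so that the
# sample covariance of the transformed data is the identity matrix.
mu = X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
Z = (X - mu) @ (eigvec / np.sqrt(eigval))

# One-neuron perceptron with tanh activation, trained by gradient
# descent on a sum-of-squares cost. Per the abstract, the iteration
# count and learning step help determine which regression the weight
# vector approximates; the values used here are arbitrary assumptions.
w, b, eta = np.zeros(p), 0.0, 0.01
for _ in range(200):
    out = np.tanh(Z @ w + b)
    delta = (out - y) * (1.0 - out**2)   # cost gradient at pre-activation
    w -= eta * Z.T @ delta / n
    b -= eta * delta.mean()

print("training MSE:", np.mean((np.tanh(Z @ w + b) - y) ** 2))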
ISSN: 0893-6080
eISSN: 1879-2782
DOI: 10.1016/S0893-6080(00)00025-3