The Rosenblatt Bayesian Algorithm Learning in a Nonstationary Environment
Published in: IEEE Transactions on Neural Networks, 2007-03, Vol. 18 (2), pp. 584-588
Main Author:
Format: Article
Language: English
Subjects:
Summary: In this letter, we study online learning in neural networks (NNs) obtained by approximating Bayesian learning. The approach is applied to Gibbs learning with the Rosenblatt potential in a nonstationary environment. The online scheme is obtained by the minimization (maximization) of the Kullback-Leibler divergence (cross entropy) between the true posterior distribution and the parameterized one. The complexity of the learning algorithm is further decreased by projecting the posterior onto a Gaussian distribution and imposing a spherical covariance matrix. We study in detail the particular case of learning linearly separable rules. In the case of a fixed rule, we observe an asymptotic generalization error $\epsilon_g \propto \alpha^{-1}$ for both the spherical and the full covariance matrix approximations. However, in the case of a drifting rule, only the full covariance matrix algorithm shows good performance. This good performance is somewhat of a surprise, since the algorithm is obtained by projection without the benefit of extra information about the drift.
ISSN: 1045-9227, 2162-237X, 1941-0093, 2162-2388
DOI: 10.1109/TNN.2006.889943
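
As a concrete illustration of the approach described in the summary, the sketch below implements a Gaussian-projected, spherical-covariance online learner for a linearly separable rule in a teacher-student perceptron setup. It is a minimal sketch under stated assumptions, not the letter's exact update equations: the probit (smoothed-threshold) likelihood, the assumed-density-filtering moment-matching step, and all names (`teacher`, `m`, `sigma2`, `beta`) are assumptions chosen for illustration, and only the fixed-rule (non-drifting) case is shown.

```python
# Illustrative sketch only: Gaussian-projected online learning of a linearly
# separable rule with a spherical covariance, teacher-student perceptron setup.
# NOT the letter's exact algorithm; the probit likelihood and the
# assumed-density-filtering (moment-matching) update are assumptions.
import math
import numpy as np

rng = np.random.default_rng(0)

N = 100                 # input dimension
T = 20 * N              # number of online examples (alpha = t / N)
beta = 1e-3             # likelihood smoothing; beta -> 0 approaches a hard threshold

teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)   # fixed rule w* (no drift in this sketch)

m = np.zeros(N)         # posterior mean of the student weights
sigma2 = 1.0            # spherical posterior variance (covariance = sigma2 * I)

def gauss_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def gauss_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def generalization_error(m, teacher):
    """Disagreement probability between sign(m.x) and sign(w*.x) for Gaussian x."""
    c = m @ teacher / (np.linalg.norm(m) * np.linalg.norm(teacher) + 1e-12)
    return math.acos(min(1.0, max(-1.0, c))) / math.pi

for t in range(1, T + 1):
    x = rng.standard_normal(N)
    y = 1.0 if teacher @ x >= 0 else -1.0      # label from the linearly separable rule

    # Moment-matching step with a probit likelihood, then projection back onto
    # a spherical Gaussian N(m, sigma2 * I).
    s2 = sigma2 * (x @ x) + beta ** 2
    s = math.sqrt(s2)
    z = y * (m @ x) / s
    ratio = gauss_pdf(z) / max(gauss_cdf(z), 1e-300)   # N(z) / Phi(z)

    m = m + sigma2 * y * ratio / s * x
    # Spherical projection of the covariance update: spread the rank-one
    # variance reduction evenly over the N directions.
    sigma2 -= (sigma2 ** 2) * (x @ x) * ratio * (z + ratio) / (N * s2)
    sigma2 = max(sigma2, 1e-12)

    if t % (5 * N) == 0:
        print(f"alpha = {t / N:5.1f}   eps_g = {generalization_error(m, teacher):.4f}")
```

Run as a script, the printed generalization error should fall as alpha = t/N grows, qualitatively in line with the $\epsilon_g \propto \alpha^{-1}$ scaling quoted in the summary for a fixed rule. Tracking a drifting rule, which the summary attributes to the full covariance matrix algorithm, is not attempted by this spherical sketch.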