An incremental learning method for neural networks in adaptive environments
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: Many real scenarios in machine learning are non-stationary. This makes it necessary to develop new algorithms that can deal with changes in the underlying problem to be learned. These changes can be gradual or abrupt, and because their dynamics can differ, existing machine learning algorithms have difficulty coping with them. In this work we propose a new method based on the introduction of a forgetting function into an incremental online learning algorithm for two-layer feedforward neural networks. This forgetting function gives monotonically increasing importance to new data, so the network forgets in the presence of changes while maintaining stable behavior when the context is stationary. The theoretical basis for the method is given and its performance is illustrated experimentally. The results confirm that the proposed method is able to work in evolving environments.
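The abstract only outlines the mechanism, so the following is a minimal, hypothetical Python sketch of the general idea: an online learner for a two-layer feedforward network whose accumulated statistics are discounted by an exponential forgetting factor, so that newer samples carry progressively more weight. The class name ForgettingOnlineNet, the fixed random hidden layer, and the exponential discounting scheme are illustrative assumptions; the paper's actual forgetting function and update rule are not reproduced here.

```python
import numpy as np

class ForgettingOnlineNet:
    """Minimal sketch: a two-layer feedforward network trained online,
    where an exponential forgetting factor discounts old samples so that
    new data receives increasing relative importance. The fixed random
    hidden layer and the specific forgetting scheme are illustrative
    assumptions, not the algorithm from the paper."""

    def __init__(self, n_in, n_hidden, n_out, forgetting=0.995, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W_h = rng.normal(size=(n_hidden, n_in))      # fixed hidden-layer weights
        self.b_h = rng.normal(scale=0.1, size=n_hidden)   # fixed hidden-layer biases
        self.forgetting = forgetting                      # lambda < 1 => old data fades
        self.reg = reg
        self.A = np.zeros((n_hidden, n_hidden))           # weighted covariance of hidden activations
        self.B = np.zeros((n_hidden, n_out))              # weighted activation-target cross term
        self.W_out = np.zeros((n_hidden, n_out))          # output-layer weights (learned online)

    def _hidden(self, x):
        return np.tanh(self.W_h @ x + self.b_h)

    def partial_fit(self, x, y):
        """Process one sample: discount the accumulated statistics by the
        forgetting factor, add the new sample, re-solve the output layer."""
        h = self._hidden(x)
        lam = self.forgetting
        self.A = lam * self.A + np.outer(h, h)
        self.B = lam * self.B + np.outer(h, y)
        ridge = self.A + self.reg * np.eye(self.A.shape[0])
        self.W_out = np.linalg.solve(ridge, self.B)       # weighted least-squares output weights

    def predict(self, x):
        return self._hidden(x) @ self.W_out


# Usage: track y = sin(w * x) where w changes abruptly mid-stream (concept drift).
model = ForgettingOnlineNet(n_in=1, n_hidden=15, n_out=1, forgetting=0.995)
rng = np.random.default_rng(1)
for t in range(4000):
    w = 1.0 if t < 2000 else 3.0                          # abrupt context change at t = 2000
    x = rng.uniform(-1.0, 1.0, size=1)
    y = np.array([np.sin(w * x[0])])
    model.partial_fit(x, y)
print(model.predict(np.array([0.5])))                     # should approach sin(1.5) after adaptation
```

In this sketch, setting the forgetting factor to 1 recovers ordinary cumulative least squares (stable, no forgetting), while values below 1 make the effective memory finite, which is what lets the model follow the change in the target function.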
ISSN: 2161-4393, 2161-4407
DOI: 10.1109/IJCNN.2010.5596335