Class and subclass probability re-estimation to adapt a classifier in the presence of concept drift
Published in: Neurocomputing (Amsterdam), 2011-09, Vol. 74 (16), pp. 2614-2623
Main Authors: , ,
Format: Article
Language: English
Subjects:
Summary: We consider the problem of classification in environments where training and test data may come from different probability distributions. When the fundamental stationary-distribution assumption made in supervised learning (and often not satisfied in practice) does not hold, classifier performance may deteriorate significantly. Several proposals have been made to deal with classification problems where the class priors change after training, but they may fail when the class-conditional data densities also change. To cope with this problem, we propose an algorithm that uses unlabeled test data to adapt the classifier outputs to new operating conditions, without re-training it. The algorithm is based on a posterior probability model with two main assumptions: (1) the classes may be decomposed into several (unknown) subclasses, and (2) all changes in the data distributions arise from changes in the prior subclass probabilities. Experimental results with a neural network model on synthetic and practical remote sensing settings show that adaptation at the subclass level achieves a better adjustment to the new operating conditions than methods based on class prior changes alone.
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2011.03.019