An overview and a benchmark of active learning for outlier detection with one-class classifiers
Published in: Expert systems with applications, 2021-04, Vol. 168, p. 114372, Article 114372
Main Authors: , ,
Format: Article
Language: English
Summary: Active learning methods increase classification quality by means of user feedback. An important subcategory is active learning for outlier detection with one-class classifiers. While various methods in this category exist, selecting one for a given application scenario is difficult. This is because existing methods rely on different assumptions, have different objectives, and often are tailored to a specific use case. All this calls for a comprehensive comparison, the topic of this article.
This article starts with a categorization of the various methods. Interestingly, many assumptions in the literature are implicit, and their impact has not been discussed so far. Based on this, we propose a novel approach to evaluate active learning results: it quantifies, in a compact yet nuanced manner, how classification results evolve with more user feedback. We run over 84,000 experiments to compare state-of-the-art one-class active learning methods across a broad variety of scenarios. One key finding is that no single active learning method stands out in a competitive evaluation. Instead, we found that selecting a good query strategy alone is not sufficient, since results hinge significantly on other factors, such as the selection of hyperparameter values. Our results show that some configurations are more robust than others. We conclude by phrasing our findings as guidelines on how to select active learning methods for outlier detection with one-class classifiers.
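The evaluation idea described above, tracking how classification quality evolves as user feedback accrues and then summarizing the resulting progress curve, can be sketched in a few lines. The following is a hypothetical toy illustration, not the article's benchmark setup: the distance-threshold "one-class model", the boundary-closeness query strategy, and all constants are illustrative assumptions.

```python
import random

random.seed(0)

# Synthetic 1-D data: inliers near 0, outliers near 5 (label 1 = outlier).
data = [random.gauss(0, 1) for _ in range(40)] + [random.gauss(5, 1) for _ in range(10)]
labels = [0] * 40 + [1] * 10

def predict(x, center, threshold):
    # Toy one-class decision: flag as outlier if far from the inlier center.
    return 1 if abs(x - center) > threshold else 0

def accuracy(center, threshold):
    return sum(predict(x, center, threshold) == y
               for x, y in zip(data, labels)) / len(data)

# Active learning loop with a boundary-closeness query strategy:
# ask the user about the unlabeled point closest to the decision boundary.
labeled = {0}                      # indices with feedback; seeded with one inlier
center, threshold = data[0], 2.5   # illustrative initial model
progress = []                      # the progress curve: accuracy per round
for _ in range(10):                # feedback budget of 10 queries
    qi = min((i for i in range(len(data)) if i not in labeled),
             key=lambda i: abs(abs(data[i] - center) - threshold))
    labeled.add(qi)                # the "user" reveals labels[qi]
    known_inliers = [data[i] for i in labeled if labels[i] == 0]
    center = sum(known_inliers) / len(known_inliers)   # refit the toy model
    progress.append(accuracy(center, threshold))

# Summarize the curve compactly, e.g. by its mean (area under the curve)
# and its final value after the full budget.
summary = (sum(progress) / len(progress), progress[-1])
```

The curve `progress` records quality after each feedback round; scalar summaries such as its area or endpoint let many such curves be compared across scenarios, which is the kind of compact comparison the article argues for.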
Highlights:
- Categorization of assumptions and objectives of one-class active learning.
- Novel progress curve summaries to facilitate reliable evaluation of active learning.
- Large benchmark with 84,000 learning scenarios, classifiers, and query strategies.
- Derivation of guidelines to select suitable one-class active learning methods.
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2020.114372