Generalized Ambiguity Decompositions for Classification with Applications in Active Learning and Unsupervised Ensemble Pruning
Main Authors: , , ,
Format: Conference Proceeding
Language: English
Summary: Error decomposition analysis is a key problem for ensemble learning. Two commonly used error decomposition schemes, the classic Ambiguity Decomposition and the Bias-Variance-Covariance decomposition, are suitable only for regression tasks with square loss. We generalized the classic Ambiguity Decomposition from regression problems with square loss to classification problems with any twice-differentiable loss function, including the logistic loss in Logistic Regression, the exponential loss in Boosting methods, and the 0-1 loss in many other classification tasks. We further proved several important properties of the Ambiguity term; with these properties, the Ambiguity terms for the logistic loss, the exponential loss, and the 0-1 loss can be explicitly computed and optimized. We also discussed the relationship between margin theory, the "good" and "bad" diversity theory, and our theoretical results, and provided new insights for ensemble learning. We demonstrated applications of our theoretical results in active learning and unsupervised ensemble pruning, and the experimental results confirmed the effectiveness of our methods.
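For context, the classic square-loss Ambiguity Decomposition that the summary refers to (the Krogh-Vedelsby regression-case identity, which this paper generalizes to classification losses) can be sketched as below. This is background for the standard formula, not the paper's generalized result; the weights w_i are assumed to form a convex combination.

```latex
% Classic Ambiguity Decomposition for square loss (regression case).
% \bar{f}(x) = \sum_i w_i f_i(x) is the weighted ensemble prediction,
% with w_i >= 0 and \sum_i w_i = 1.
\[
\bigl(\bar{f}(x) - y\bigr)^2
  \;=\; \underbrace{\sum_i w_i \bigl(f_i(x) - y\bigr)^2}_{\text{weighted average member error}}
  \;-\; \underbrace{\sum_i w_i \bigl(f_i(x) - \bar{f}(x)\bigr)^2}_{\text{Ambiguity (diversity) term}}
\]
```

Because the Ambiguity term is non-negative, the ensemble's square loss never exceeds the weighted average of the members' losses; the paper's contribution is an analogous decomposition for twice-differentiable classification losses.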
ISSN: 2159-5399, 2374-3468
DOI: 10.1609/aaai.v31i1.10834