Comparison of base classifiers for multi-label learning

Bibliographic Details
Published in: Neurocomputing (Amsterdam), 2020-06, Vol. 394, pp. 51-60
Main Authors: Yapp, Edward K. Y., Li, Xiang, Lu, Wen Feng, Tan, Puay Siew
Format: Article
Language: English
Description
Summary: Multi-label learning methods can be categorised into algorithm adaptation, problem transformation and ensemble methods. Some of these methods depend on a base classifier, yet the relationship between method and classifier is not well understood. In this paper, the sensitivity of five problem transformation and two ensemble methods to four types of base classifier is studied. Their performance across 11 benchmark datasets is measured using 16 evaluation metrics. The best classifier is shown to depend on the method: Support Vector Machines (SVM) for binary relevance, classifier chains, calibrated label ranking, quick weighted multi-label learning and RAndom k-labELsets; k-Nearest Neighbours (k-NN) and Naïve Bayes (NB) for Hierarchy Of Multilabel classifiERs; and Decision Trees (DT) for ensemble of classifier chains. The statistical performance of a classifier is also found to be generally consistent across the metrics for any given method. Overall, DT and SVM offer the best performance–computational time trade-off, followed by k-NN and NB.
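
The kind of method/base-classifier comparison the abstract describes can be sketched in a few lines of Python. The snippet below is not the authors' code (their experiments span 11 benchmark datasets and 16 metrics); it is an illustrative assumption using scikit-learn's OneVsRestClassifier as a binary relevance transformation and ClassifierChain for classifier chains, swapping in the four base classifier families on synthetic data with a single metric (Hamming loss).

    # Illustrative sketch only, not the paper's experimental code:
    # swap four base classifiers into two problem transformation methods.
    from sklearn.datasets import make_multilabel_classification
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier   # binary relevance
    from sklearn.multioutput import ClassifierChain      # classifier chains
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import hamming_loss

    # Synthetic multi-label data standing in for the benchmark datasets.
    X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                          n_classes=5, random_state=0)
    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

    base_classifiers = {
        "SVM": SVC(),
        "k-NN": KNeighborsClassifier(),
        "NB": GaussianNB(),
        "DT": DecisionTreeClassifier(random_state=0),
    }

    for name, base in base_classifiers.items():
        for method, model in [
            ("binary relevance", OneVsRestClassifier(base)),
            ("classifier chain", ClassifierChain(base, random_state=0)),
        ]:
            # scikit-learn clones the base estimator internally, so the
            # same instance can be reused across both transformations.
            model.fit(X_train, Y_train)
            loss = hamming_loss(Y_test, model.predict(X_test))
            print(f"{method} + {name}: Hamming loss = {loss:.3f}")
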
ISSN: 0925-2312
EISSN: 1872-8286
DOI: 10.1016/j.neucom.2020.01.102