Leveraging Implicit Relative Labeling-Importance Information for Effective Multi-Label Learning

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2021-05, Vol. 33 (5), p. 2057-2070
Main Authors: Zhang, Min-Ling, Zhang, Qian-Wen, Fang, Jun-Peng, Li, Yu-Kun, Geng, Xin
Format: Article
Language: English
Description
Summary: Multi-label learning deals with training examples each represented by a single instance while associated with multiple class labels, and the task is to train a predictive model that can assign a set of proper labels to an unseen instance. Existing approaches employ the common assumption of equal labeling-importance, i.e., all associated labels are regarded as relevant to the training instance while their relative importance in characterizing its semantics is not differentiated. Nonetheless, this common assumption does not reflect the fact that the importance degree of each relevant label is generally different, even though the importance information is not directly accessible from the training examples. In this article, we show that it is beneficial to leverage the implicit relative labeling-importance (RLI) information to help induce a multi-label predictive model with strong generalization performance. Specifically, RLI degrees are formalized as a multinomial distribution over the label space, which can be estimated by either a global label propagation procedure or local k-nearest-neighbor reconstruction. Correspondingly, the multi-label predictive model is induced by fitting modeling outputs with estimated RLI degrees along with multi-label empirical loss regularization. Extensive experiments clearly validate that leveraging implicit RLI information serves as a favorable strategy to achieve effective multi-label learning.
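To make the local estimation idea concrete, here is a minimal, hypothetical sketch of k-nearest-neighbor-based RLI estimation. It is an illustrative simplification, not the paper's exact reconstruction procedure: each instance's relevant labels receive importance mass proportional to their own indicator plus the average support from that instance's neighbors, then the mass is normalized so each row forms a multinomial distribution over the label space.

```python
import numpy as np

def estimate_rli_knn(X, Y, k=3):
    """Illustrative k-NN heuristic for relative labeling-importance.

    X : (n, d) feature matrix
    Y : (n, q) binary relevance matrix (Y[i, j] = 1 iff label j is relevant)
    Returns U : (n, q) RLI degrees; each row sums to 1 and places
    nonzero mass only on the instance's relevant labels.
    """
    n = X.shape[0]
    U = np.zeros(Y.shape, dtype=float)
    for i in range(n):
        # Euclidean distances to all other instances
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf                      # exclude the instance itself
        nbrs = np.argsort(dist)[:k]           # indices of k nearest neighbors
        # Neighbor label support plus the instance's own labels
        votes = Y[nbrs].mean(axis=0) + Y[i]
        votes = votes * Y[i]                  # zero out irrelevant labels
        U[i] = votes / votes.sum()            # normalize to a multinomial distribution
    return U
```

Because each instance contributes its own label indicator before normalization, every relevant label keeps strictly positive importance even when no neighbor shares it, while labels co-occurring among neighbors are weighted higher.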
ISSN: 1041-4347
EISSN: 1558-2191
DOI: 10.1109/TKDE.2019.2951561