Unsupervised domain adaptation via distilled discriminative clustering
Published in: Pattern Recognition, 2022-07, Vol. 127, p. 108638, Article 108638
Main Authors: Hui Tang, Yaowei Wang, Kui Jia
Format: Article
Language: English
Summary:
• We propose to solve the unsupervised domain adaptation problem by distilled discriminative clustering.
• Motivated by the essential assumption for domain adaptability, we propose to reformulate the domain adaptation problem as discriminative clustering of target data, given strong privileged information from the semantically related, labeled source data. By properly distilling discriminative source information for clustering of the target data, we aim to learn classification of target data directly, with no explicit feature alignment.
• We present clustering objectives based on a robust variant of entropy minimization for reliable cluster separation, a soft Fisher-like criterion for inter-cluster isolation and intra-cluster purity and compactness, and centroid classification for consistent cluster ordering across domains. To distill discriminative source information for target clustering, we use parallel, supervised learning objectives on the labeled source data.
• We also give geometric intuition that illustrates how the constituent objectives of our method help learn class-wise pure, compact feature distributions.
• Experiments on five benchmarks show that our method achieves the new state of the art.
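Concretely, the clustering objectives listed above admit a short sketch. The following is a minimal, hypothetical PyTorch-style illustration, not the authors' released code: the exact adaptive filtering, the precise Fisher-like formulation, and names such as `robust_entropy`, `soft_fisher`, `centroid_ordering`, and `conf_threshold` are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F


def robust_entropy(logits, conf_threshold=0.9):
    """Entropy minimization over target predictions, keeping only
    confidently predicted samples (a hypothetical stand-in for the
    paper's adaptive filtering of target data)."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    mask = (probs.max(dim=1).values >= conf_threshold).float()
    return (entropy * mask).sum() / mask.sum().clamp(min=1.0)


def soft_fisher(features, probs):
    """Soft Fisher-like criterion: small intra-cluster scatter relative
    to inter-cluster separation, under soft cluster assignments."""
    weights = probs / probs.sum(dim=0, keepdim=True).clamp(min=1e-8)
    centroids = weights.t() @ features             # (K, D) soft class centroids
    dists = torch.cdist(features, centroids) ** 2  # (N, K) squared distances
    intra = (probs * dists).sum(dim=1).mean()      # intra-cluster purity/compactness
    inter = torch.pdist(centroids).pow(2).mean()   # inter-cluster isolation
    return intra / (inter + 1e-8)


def centroid_ordering(centroids, classifier):
    """Centroid classification: classify the k-th centroid as class k,
    so target clusters keep the ordering of the source labels."""
    labels = torch.arange(centroids.size(0), device=centroids.device)
    return F.cross_entropy(classifier(centroids), labels)
```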
Unsupervised domain adaptation addresses the problem of classifying data in an unlabeled target domain, given labeled source-domain data that share a common label space but follow a different distribution. Most recent methods take the approach of explicitly aligning feature distributions between the two domains. Differently, motivated by the fundamental assumption for domain adaptability, we re-cast the domain adaptation problem as discriminative clustering of target data, given strong privileged information provided by the closely related, labeled source data. Technically, we use clustering objectives based on a robust variant of entropy minimization that adaptively filters target data, a soft Fisher-like criterion, and additionally the cluster ordering via centroid classification. To distill discriminative source information for target clustering, we propose to jointly train the network using parallel, supervised learning objectives over the labeled source data. We term our method of distilled discriminative clustering for domain adaptation DisClusterDA. We also give geometric intuition that illustrates how the constituent objectives of DisClusterDA help learn class-wise pure, compact feature distributions. We conduct careful ablation studies and extensive experiments on five benchmarks, where our method achieves the new state of the art.
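A joint training step combining these terms with the parallel supervised source objective might then look as follows. This sketch reuses the hypothetical helpers above; the interfaces `backbone`, `classifier`, `optimizer` and the loss weights `lam_*` are illustrative assumptions, not values from the paper.

```python
def train_step(backbone, classifier, optimizer, xs, ys, xt,
               lam_ent=1.0, lam_fisher=0.1, lam_ord=0.1):
    """One joint update: supervised cross-entropy on labeled source data,
    in parallel with discriminative clustering losses on unlabeled target
    data; no explicit feature alignment between the two domains."""
    optimizer.zero_grad()
    # Supervised objective on the labeled source domain, which distills
    # discriminative source information into the shared network.
    loss = F.cross_entropy(classifier(backbone(xs)), ys)
    # Discriminative clustering objectives on the unlabeled target domain.
    feat_t = backbone(xt)
    logits_t = classifier(feat_t)
    probs_t = F.softmax(logits_t, dim=1)
    weights = probs_t / probs_t.sum(dim=0, keepdim=True).clamp(min=1e-8)
    centroids = weights.t() @ feat_t
    loss = loss + lam_ent * robust_entropy(logits_t)
    loss = loss + lam_fisher * soft_fisher(feat_t, probs_t)
    loss = loss + lam_ord * centroid_ordering(centroids, classifier)
    loss.backward()
    optimizer.step()
    return loss.item()
```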
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2022.108638