Cluster Alignment With Target Knowledge Mining for Unsupervised Domain Adaptation Semantic Segmentation

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2022, Vol. 31, pp. 7403-7418
Main Authors: Wang, Shuang; Zhao, Dong; Zhang, Chi; Guo, Yuwei; Zang, Qi; Gu, Yu; Li, Yi; Jiao, Licheng
Format: Article
Language: English
Summary: Unsupervised domain adaptation (UDA) transfers knowledge from a labeled source domain to an unlabeled target domain. Existing feature alignment methods in UDA semantic segmentation pursue this goal by aligning the feature distributions between domains. However, these methods ignore the domain-specific knowledge of the target domain, with two consequences: 1) the correlation among target-domain pixels is not explored; and 2) the classifier is not explicitly designed for the target-domain distribution. To overcome these obstacles, we propose a novel cluster alignment framework that mines domain-specific knowledge while performing the alignment. Specifically, we design a multi-prototype clustering strategy that makes target-domain pixel features of the same class tightly distributed. A contrastive strategy is then developed to align the distributions between domains while preserving the clustered structure. Finally, a novel affinity-based normalized cut loss is devised to learn task-specific decision boundaries. Our method enhances the model's adaptability to the target domain and can serve as a pre-adaptation step for self-training to boost its performance. Extensive experiments demonstrate the effectiveness of our method against existing state-of-the-art methods on representative UDA benchmarks.
ISSN: 1057-7149 (print), 1941-0042 (electronic)
DOI: 10.1109/TIP.2022.3222634
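
The abstract names three components: a multi-prototype clustering strategy for target features, a contrastive strategy for cross-domain alignment, and an affinity-based normalized cut loss. For a concrete picture of what such losses can look like, below is a minimal PyTorch sketch. It is an illustration under stated assumptions, not the paper's implementation: every name (the loss functions, the prototype shapes, the temperature) is hypothetical, pseudo-labels and the pixel affinity matrix are assumed to be given, and the normalized cut term follows the common soft relaxation L = C - sum_c (s_c^T W s_c)/(d^T s_c) rather than the paper's exact formula (see the DOI above for the actual method).

# Illustrative sketch only; names, shapes, and loss forms are assumptions,
# not the authors' implementation.
import torch
import torch.nn.functional as F

def multi_prototype_clustering_loss(target_feats, pseudo_labels, prototypes):
    """Pull each target pixel feature toward its nearest same-class prototype.

    target_feats:  (N, D) L2-normalized pixel features from the target domain
    pseudo_labels: (N,)   pseudo class labels for those pixels (assumed given)
    prototypes:    (C, K, D) K prototypes per class, L2-normalized
    """
    protos = prototypes[pseudo_labels]                        # (N, K, D)
    sims = torch.einsum('nd,nkd->nk', target_feats, protos)   # cosine similarities
    nearest = sims.max(dim=1).values                          # best prototype per pixel
    return (1.0 - nearest).mean()                             # compact same-class features

def contrastive_alignment_loss(src_protos, tgt_protos, temperature=0.1):
    """InfoNCE-style alignment: source and target prototypes of the same
    class are positives; all other classes serve as negatives."""
    src = F.normalize(src_protos, dim=1)                      # (C, D) source class means
    tgt = F.normalize(tgt_protos, dim=1)                      # (C, D) target class means
    logits = src @ tgt.t() / temperature                      # (C, C) similarity matrix
    labels = torch.arange(src.size(0), device=src.device)     # diagonal = positives
    return F.cross_entropy(logits, labels)

def normalized_cut_loss(soft_assign, affinity):
    """Soft normalized-cut relaxation over pixel affinities.

    soft_assign: (N, C) softmax class probabilities per pixel
    affinity:    (N, N) symmetric pixel affinity matrix W (assumed given)
    Minimizes C - sum_c (s_c^T W s_c) / (d^T s_c), with degree d = W 1.
    """
    degree = affinity.sum(dim=1)                                        # (N,)
    assoc = torch.einsum('nc,nm,mc->c', soft_assign, affinity, soft_assign)
    norm = soft_assign.t() @ degree                                     # (C,)
    return soft_assign.size(1) - (assoc / norm.clamp_min(1e-6)).sum()

In a training loop, terms like these would typically be weighted and added to the standard segmentation loss on the source domain; the weighting scheme here would be a further assumption.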