NaCL: noise-robust cross-domain contrastive learning for unsupervised domain adaptation
Published in: Machine Learning, 2023-09, Vol. 112 (9), pp. 3473–3496
Main Authors: ,
Format: Article
Language: English
Summary: Unsupervised Domain Adaptation (UDA) methods aim to enhance feature transferability, possibly at the expense of feature discriminability. Recently, contrastive representation learning has been applied to UDA as a promising approach. One line of work combines mainstream domain adaptation methods with contrastive self-supervised tasks; the other uses contrastive learning to align class-conditional distributions according to the semantic structure of the source and target domains. Both approaches have limitations. First, the optimal solutions of contrastive self-supervised learning and of domain discrepancy minimization may not coincide. Second, contrastive learning relies on pseudo-labels of the target domain to align class-conditional distributions; because pseudo-labels are noisy, the resulting false positive and false negative pairs degrade the performance of contrastive learning. To address these issues, we propose Noise-robust cross-domain Contrastive Learning (NaCL), which tackles domain adaptation directly by simultaneously learning instance-wise discrimination and encoding intra- and inter-domain semantic structure into the learned representation space. Specifically, we adopt topology-based selection on the target domain to detect and remove false positive and false negative pairs from the contrastive loss. Theoretically, we show that NaCL can be viewed as an instance of Expectation Maximization (EM), and that accurate pseudo-label information reduces the expected error on the target domain. NaCL obtains superior results on three public benchmarks. Furthermore, with only minor modifications, NaCL also applies to semi-supervised domain adaptation, achieving strong diagnostic performance on a COVID-19 dataset. Code is available at https://github.com/jingzhengli/NaCL
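
The pseudo-label filtering and cross-domain contrastive loss described in the summary can be illustrated with a minimal sketch. The PyTorch snippet below is a hypothetical rendering under simplifying assumptions, not the authors' released implementation (see the repository linked above): target pseudo-labels are kept only when they agree with the majority of their k nearest neighbors in feature space, a simple stand-in for the paper's topology-based selection, and a supervised-contrastive (InfoNCE-style) loss is then computed over the pooled source and filtered target features. All names and hyperparameters (`nacl_loss`, `tau`, `k`) are illustrative.

```python
# Hypothetical sketch of a noise-robust cross-domain contrastive loss.
# NOT the authors' implementation; names and hyperparameters are assumed.
import torch
import torch.nn.functional as F


def nacl_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, tau=0.1, k=5):
    """Cross-domain supervised-contrastive loss with pseudo-label filtering.

    A target sample is kept only if its pseudo-label matches the majority
    pseudo-label of its k nearest target neighbors -- a simple stand-in for
    the paper's topology-based selection of false positive/negative pairs.
    """
    src_feats = F.normalize(src_feats, dim=1)
    tgt_feats = F.normalize(tgt_feats, dim=1)

    # Neighborhood agreement on the target domain (assumes len(tgt) > k).
    sim_tt = tgt_feats @ tgt_feats.t()
    knn = sim_tt.topk(k + 1, dim=1).indices[:, 1:]   # drop self-neighbor
    agree = (tgt_pseudo[knn] == tgt_pseudo.unsqueeze(1)).float().mean(1)
    keep = agree > 0.5                               # majority vote
    tgt_feats, tgt_pseudo = tgt_feats[keep], tgt_pseudo[keep]

    # Pool source (true labels) with filtered target (pseudo-labels).
    feats = torch.cat([src_feats, tgt_feats], dim=0)
    labels = torch.cat([src_labels, tgt_pseudo], dim=0)

    logits = feats @ feats.t() / tau
    n = feats.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=feats.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye

    # Log-softmax over all non-self pairs, averaged over positive pairs.
    log_prob = logits - torch.logsumexp(
        logits.masked_fill(eye, float("-inf")), dim=1, keepdim=True
    )
    n_pos = pos.sum(1)
    loss = -(log_prob * pos.float()).sum(1) / n_pos.clamp(min=1)
    return loss[n_pos > 0].mean()
```

For example, `nacl_loss(torch.randn(32, 128), torch.randint(0, 10, (32,)), torch.randn(64, 128), torch.randint(0, 10, (64,)))` returns a scalar loss. In the paper the selection operates on the topology of the target feature space; the kNN majority vote above is just one plausible simplification of that step.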
ISSN: 0885-6125; 1573-0565
DOI: 10.1007/s10994-023-06343-8