Subdomain adaptation via correlation alignment with entropy minimization for unsupervised domain adaptation
Published in: Pattern Analysis and Applications (PAA), 2024-03, Vol. 27 (1), Article 13
Format: Article
Language: English
Summary: Unsupervised domain adaptation (UDA) is a well-explored area of transfer learning with applications across many real-world scenarios. The central challenge in UDA is the domain shift between the training (source) and testing (target) data distributions. This study focuses on image classification tasks within UDA, where the label spaces are shared but the target domain lacks labeled samples. Our primary objective is to mitigate the discrepancy between the source and target domains and thereby enable robust generalization to the target domain. Domain adaptation techniques have traditionally concentrated on aligning the global feature distributions to minimize disparities. However, these methods often overlook crucial, domain-specific subdomain information within identical classification categories, making it difficult to achieve the desired performance without fine-grained alignment. To tackle these challenges, we propose a unified framework, Subdomain Adaptation via Correlation Alignment with Entropy Minimization, for unsupervised domain adaptation. Our approach incorporates three techniques: (1) Local Maximum Mean Discrepancy, which aligns the means of local feature subsets, capturing intrinsic subdomain alignments often missed by global alignment; (2) correlation alignment, which minimizes the discrepancy between the second-order statistics (correlations) of the source and target feature distributions; and (3) entropy regularization applied to the target domain to encourage low-density separation between categories. We validate the proposed method through rigorous experimental evaluations and ablation studies on standard benchmark datasets. The results consistently demonstrate the superior performance of our approach compared to existing state-of-the-art domain adaptation methods.
ISSN: 1433-7541; 1433-755X
DOI: 10.1007/s10044-024-01232-9
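
For illustration, the sketch below shows one way the three loss terms named in the summary (subdomain alignment, correlation alignment, and target entropy regularization) could be combined with a standard source classification loss. It is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: it uses hard pseudo-labels for the unlabeled target batch, a simplified per-class mean discrepancy in place of the full Local Maximum Mean Discrepancy, and illustrative trade-off weights.

```python
# Hedged sketch of the loss composition described in the abstract.
# Function names, the pseudo-labeling scheme, and the weights lam_* are
# assumptions for illustration; they are not taken from the paper.
import torch
import torch.nn.functional as F


def coral_loss(fs, ft):
    """Correlation alignment: match second-order statistics (covariances)."""
    d = fs.size(1)
    cs = torch.cov(fs.T)          # source feature covariance, (d, d)
    ct = torch.cov(ft.T)          # target feature covariance, (d, d)
    return ((cs - ct) ** 2).sum() / (4 * d * d)


def entropy_loss(logits_t):
    """Entropy regularization on target predictions (low-density separation)."""
    p = F.softmax(logits_t, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()


def class_conditional_discrepancy(fs, ys, ft, logits_t, num_classes):
    """Simplified subdomain alignment: match per-class feature means,
    using hard target pseudo-labels in place of missing target labels."""
    yt = logits_t.argmax(dim=1)   # hard pseudo-labels (assumption)
    loss = fs.new_zeros(())
    matched = 0
    for c in range(num_classes):
        ms, mt = (ys == c), (yt == c)
        if ms.any() and mt.any():
            loss = loss + ((fs[ms].mean(0) - ft[mt].mean(0)) ** 2).sum()
            matched += 1
    return loss / max(matched, 1)


def total_loss(logits_s, ys, fs, logits_t, ft, num_classes,
               lam_sub=1.0, lam_coral=1.0, lam_ent=0.1):
    """Supervised source loss plus the three adaptation terms.
    The trade-off weights are illustrative, not the paper's values."""
    cls = F.cross_entropy(logits_s, ys)
    return (cls
            + lam_sub * class_conditional_discrepancy(fs, ys, ft, logits_t, num_classes)
            + lam_coral * coral_loss(fs, ft)
            + lam_ent * entropy_loss(logits_t))
```

Note that the subdomain term here compares raw per-class feature means for brevity; the Local Maximum Mean Discrepancy named in the summary is typically formulated with kernel embeddings and soft class weights rather than hard pseudo-labeled means.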