Semantic-Aware Adaptive Prompt Learning for Universal Multi-Source Domain Adaptation
Published in: | IEEE Signal Processing Letters, 2024, Vol. 31, pp. 1444-1448 |
---|---|
Main Authors: | , , , , |
Format: | Article |
Language: | English |
Summary: | Universal multi-source domain adaptation (UniMDA) aims to transfer knowledge from multiple labeled source domains to an unlabeled target domain without constraints on the label space. Owing to its inherent domain shift (different data distributions) and class shift (unknown target classes), UniMDA is an extremely challenging task. Existing solutions, however, focus mainly on mining image features to detect unknown samples, ignoring the abundant information contained in textual semantics. In this letter, we propose a Semantic-aware Adaptive Prompt learning method based on Contrastive Language-Image Pretraining (SAP-CLIP) for UniMDA classification tasks. Concretely, we use CLIP with learnable prompts to exploit textual information about both class semantics and domain representations, helping the model detect unknown samples and handle domain shift. In addition, we propose a novel margin loss with a dynamic scoring function that enlarges the margin between the known and unknown sample sets, enabling more precise classification. Experimental results on three benchmarks confirm the state-of-the-art performance of our method. |
---|---|
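The abstract's core mechanism — CLIP-style cosine logits over prompt embeddings, plus a margin loss that pushes known-sample confidence scores above a threshold and unknown-sample scores below it — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the fixed threshold (the paper uses a *dynamic* scoring function whose exact form is not given in the abstract), and the hinge-style form of the margin loss are all assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def clip_logits(img_feats, txt_feats, temperature=0.1):
    """CLIP-style logits: scaled cosine similarity between L2-normalized
    image features and text (prompt) embeddings."""
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    return (img @ txt.T) / temperature

def margin_loss(logits, threshold, margin=0.2):
    """Hinge-style margin loss (illustrative stand-in for the paper's loss):
    samples whose max class probability exceeds `threshold` are treated as
    known and pushed above threshold + margin/2; the rest are treated as
    unknown and pushed below threshold - margin/2."""
    conf = softmax(logits).max(axis=1)           # per-sample max class probability
    known = conf >= threshold
    known_gap = np.maximum(0.0, threshold + margin / 2 - conf[known])
    unknown_gap = np.maximum(0.0, conf[~known] - (threshold - margin / 2))
    return (known_gap.sum() + unknown_gap.sum()) / max(len(conf), 1)

# toy example: 3 target images, 2 known classes (random features)
rng = np.random.default_rng(0)
img_feats = rng.normal(size=(3, 8))
txt_feats = rng.normal(size=(2, 8))   # one prompt embedding per known class
logits = clip_logits(img_feats, txt_feats)
loss = margin_loss(logits, threshold=0.6)
```

In the paper, the prompt embeddings would come from learnable prompt tokens encoding both class semantics and domain representations, and the threshold would be produced by the dynamic scoring function rather than fixed as here.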
ISSN: | 1070-9908, 1558-2361 |
DOI: | 10.1109/LSP.2024.3389508 |