
Domain-Collaborative Contrastive Learning for Hyperspectral Image Classification

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, 2024, Vol. 21, pp. 1-5
Main Authors: Luo, Haiyang, Qiao, Xueyi, Xu, Yongming, Zhong, Shengwei, Gong, Chen
Format: Article
Language: English
Description
Summary: Variations in atmosphere, lighting, and imaging systems result in diverse category distributions in hyperspectral imagery, impacting the accuracy of cross-domain hyperspectral image classification (HSIC). Unsupervised domain adaptation (UDA) aims to address this issue by learning a model that generalizes effectively across domains, leveraging labels only from the source domain (SD). Most existing UDA methods focus on aligning distributions between domains without fully considering the valuable information within individual domains. To fill this gap, this letter proposes a domain-collaborative contrastive learning (DCCL) method. DCCL integrates a novel pseudo-labeling strategy with a cross-domain contrastive learning (CL) framework. Specifically, in the pseudo-labeling phase, confident examples in the target domain (TD) are collaboratively labeled according to the labeled examples in the SD and the class centers in the TD. Then, the CL phase simultaneously minimizes in-domain and cross-domain contrastive losses to promote the aggregation of examples from the same category in both domains. Experimental results demonstrate that DCCL achieves accuracy rates of 93.47% and 54.59% on the Pavia and Indiana datasets, respectively, surpassing the performance of other state-of-the-art UDA methods. Our source code is available at https://github.com/Leap-luohaiyang/DCCL-2024.
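
The summary describes DCCL only at a high level: a collaborative pseudo-labeling phase followed by joint in-domain and cross-domain contrastive learning. The minimal PyTorch-style sketch below illustrates one way those two phases could fit together; the function names, the confidence threshold tau, the centroid-agreement rule, and the weighting lam are assumptions made for illustration, not details taken from the authors' implementation (which is available at the GitHub link above).

    # Hypothetical sketch of the two DCCL phases; not the authors' code.
    import torch
    import torch.nn.functional as F

    def collaborative_pseudo_labels(tgt_feats, tgt_logits, class_centers, tau=0.9):
        # Keep a target-domain (TD) example only when the source-trained classifier
        # and the nearest TD class center agree and the classifier is confident.
        probs = F.softmax(tgt_logits, dim=1)
        conf, cls_pred = probs.max(dim=1)                  # classifier prediction
        sims = F.normalize(tgt_feats, dim=1) @ F.normalize(class_centers, dim=1).T
        center_pred = sims.argmax(dim=1)                   # nearest TD class center
        mask = (cls_pred == center_pred) & (conf > tau)    # collaborative agreement
        return cls_pred, mask

    def supervised_contrastive_loss(feats, labels, temperature=0.1):
        # Supervised contrastive loss: examples sharing a label are positives.
        feats = F.normalize(feats, dim=1)
        logits = feats @ feats.T / temperature
        n = feats.size(0)
        eye = torch.eye(n, dtype=torch.bool, device=feats.device)
        pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
        log_prob = logits - torch.logsumexp(
            logits.masked_fill(eye, float('-inf')), dim=1, keepdim=True)
        denom = pos.sum(dim=1).clamp(min=1)
        return -((log_prob * pos.float()).sum(dim=1) / denom).mean()

    def dccl_style_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, lam=1.0):
        # In-domain terms aggregate same-class examples within SD and TD separately;
        # the cross-domain term does the same over the pooled batch.
        l_src = supervised_contrastive_loss(src_feats, src_labels)
        l_tgt = supervised_contrastive_loss(tgt_feats, tgt_pseudo)
        l_cross = supervised_contrastive_loss(
            torch.cat([src_feats, tgt_feats]), torch.cat([src_labels, tgt_pseudo]))
        return l_src + l_tgt + lam * l_cross

In practice only the TD examples selected by the agreement mask would enter the contrastive phase, and tau, lam, and the temperature would need tuning; the accuracy figures quoted in the summary come from the authors' full method, not from this sketch.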
ISSN: 1545-598X, 1558-0571
DOI: 10.1109/LGRS.2024.3425482