
When Multigranularity Meets Spatial–Spectral Attention: A Hybrid Transformer for Hyperspectral Image Classification

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2023, Vol. 61, pp. 1-18
Main Authors: Ouyang, Er, Li, Bin, Hu, Wenjing, Zhang, Guoyun, Zhao, Lin, Wu, Jianhui
Format: Article
Language: English
Description
Summary: The transformer framework has shown great potential in the field of hyperspectral image (HSI) classification due to its superior global modeling capability compared with convolutional neural networks (CNNs). To model spatial–spectral information with a transformer, a hybrid transformer (HybridFormer) that integrates multigranularity tokens and spatial–spectral attention (SSA) is proposed. Specifically, a token generator is designed to embed multigranularity semantic tokens, providing the model with richer image features by exploiting the CNN's local representation capability. Moreover, a transformer encoder with an SSA mechanism is proposed to capture the global dependencies between tokens, enabling the model to focus on the most discriminative channels and spatial locations and thereby improve classification accuracy. Finally, adaptive weighted fusion is applied across the transformer branches of different granularities to boost HybridFormer's classification performance. Experiments on four new, challenging datasets show that HybridFormer achieves state-of-the-art classification performance. The code for this work will be made available at https://github.com/zhaolin6/HybridFormer for the sake of reproducibility.
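
For readers who want a concrete picture of the spatial–spectral attention idea described in the abstract, below is a minimal PyTorch sketch of one plausible form of such a module: standard multi-head self-attention over patch tokens (capturing global spatial dependencies) combined with a squeeze-and-excitation style channel gate (reweighting spectral feature channels). The class name, parameters, and gating design here are illustrative assumptions, not the paper's actual SSA; the authors' implementation is at the GitHub link above.

```python
import torch
import torch.nn as nn


class SpatialSpectralAttention(nn.Module):
    """Illustrative spatial-spectral attention (hypothetical, not the paper's exact SSA):
    multi-head self-attention across tokens, gated by a per-channel (spectral) weight."""

    def __init__(self, dim: int, num_heads: int = 4, reduction: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Spatial branch: global token-to-token dependencies.
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Spectral branch: squeeze over tokens, excite per feature channel.
        self.spectral_gate = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),
            nn.Sigmoid(),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim)
        x = self.norm(tokens)
        attn_out, _ = self.spatial_attn(x, x, x)     # (batch, num_tokens, dim)
        gate = self.spectral_gate(x.mean(dim=1))     # (batch, dim) channel weights
        return tokens + attn_out * gate.unsqueeze(1) # residual, channel-reweighted


if __name__ == "__main__":
    ssa = SpatialSpectralAttention(dim=64, num_heads=4)
    patch_tokens = torch.randn(2, 81, 64)  # e.g., a 9x9 spatial patch -> 81 tokens
    print(ssa(patch_tokens).shape)         # torch.Size([2, 81, 64])
```

In the paper, SSA sits inside each transformer encoder branch; the sketch shows it standalone on one granularity's token sequence. The adaptive weighted fusion of the multigranularity branches could, under the same assumptions, be a learned weighting over branch outputs, which is omitted here.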
ISSN: 0196-2892 (print); 1558-0644 (electronic)
DOI: 10.1109/TGRS.2023.3242978