
Achieving Better Category Separability for Hyperspectral Image Classification: A Spatial-Spectral Approach

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-07, Vol. 35 (7), p. 9621-9635
Main Authors: Bai, Jing, Shi, Wei, Xiao, Zhu, Ali, Talal Ahmed Ali, Ye, Fawang, Jiao, Licheng
Format: Article
Language: English
Description
Summary: The task of hyperspectral image (HSI) classification has attracted extensive attention. The rich spectral information in HSIs provides more detailed information but also introduces considerable redundancy. This redundancy causes the spectral curves of different categories to exhibit similar trends, which leads to poor category separability. In this article, we achieve better category separability by increasing the difference between categories and reducing the variation within each category, thereby improving classification accuracy. Specifically, from the spectral perspective, we first propose a template spectrum-based processing module, which effectively exposes the unique characteristics of different categories and makes it easier for the model to mine key features. Second, from the spatial perspective, we design an adaptive dual attention network, in which the target pixel adaptively aggregates high-level features by evaluating the confidence of the effective information in different receptive fields. Compared with a single-adjacency scheme, the adaptive dual attention mechanism makes the target pixel's ability to combine spatial information and reduce within-category variation more stable. Finally, from the classifier's perspective, we design a dispersion loss. By supervising the learnable parameters of the final classification layer, this loss makes the category standard eigenvectors learned by the model more dispersed, which improves category separability and reduces the misclassification rate. Experiments on three common datasets show that the proposed method is superior to the comparison methods.
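The abstract does not give the dispersion loss in formula form. Below is a minimal sketch of one plausible realization, assuming the "category standard eigenvectors" are the rows of the final linear classification layer's weight matrix and that dispersion is encouraged by penalizing pairwise cosine similarity between them; the paper's exact formulation may differ, and the names dispersion_loss, classifier_weight, and model.classifier are illustrative, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def dispersion_loss(classifier_weight: torch.Tensor) -> torch.Tensor:
        # classifier_weight: (num_classes, feat_dim) weight matrix of the final linear layer
        w = F.normalize(classifier_weight, dim=1)        # unit-norm class weight vectors
        cos = w @ w.t()                                  # pairwise cosine similarities
        eye = torch.eye(cos.size(0), device=cos.device)
        off_diag = cos * (1.0 - eye)                     # drop self-similarity on the diagonal
        return off_diag.clamp(min=0).mean()              # penalize pairs of similar class vectors

    # Usage sketch (hypothetical names):
    # logits = model(x)
    # loss = F.cross_entropy(logits, y) + 0.1 * dispersion_loss(model.classifier.weight)

Pushing the class weight vectors apart in this way spreads the decision directions of the classifier, which is one common way to encourage the kind of inter-category dispersion the abstract describes.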
ISSN: 2162-237X
2162-2388
DOI: 10.1109/TNNLS.2023.3235711