Instructional Mask Autoencoder: A Scalable Learner for Hyperspectral Image Classification
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2024, Vol. 17, pp. 1348-1362
Main Authors:
Format: Article
Language: English
Summary: An increasing number of hyperspectral images (HSIs) are becoming available, yet unlabeled HSIs are rarely exploited because annotation is expensive. It is therefore crucial to use these unlabeled HSIs to enhance classification performance. Fortunately, self-supervised training can extract latent features from unlabeled HSIs, thereby improving network performance via transfer learning. However, most current networks for HSIs are inflexible, making it difficult for them to perform such learning and to accommodate multimodal HSIs. We therefore devise a scalable self-supervised network, the instructional mask autoencoder, which extracts general patterns of HSIs from unannotated data. It consists primarily of a spatial-spectral embedding block and a transformer-based masked autoencoder, which project input samples into a shared latent space and learn higher-level semantic information, respectively. Moreover, we use a random token, called ins_token, to instruct the model to learn the components of global information that are highly correlated with the target pixel in HSI samples. In the fine-tuning stage, we design a learnable aggregation mechanism that puts all tokens to full use. The results show that our method generalizes robustly and accelerates convergence across diverse datasets. Under limited-sample conditions, we conducted experiments on three structurally distinct HSIs and achieved competitive performance on all of them. Compared with state-of-the-art methods, our approach improves accuracy by 1.97%, 0.44%, and 3.35% on the three datasets, respectively.
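To make the abstract's components concrete, below is a minimal PyTorch sketch of how a spatial-spectral embedding block, a masked autoencoder with a prepended instruction token (ins_token), and a learnable token-aggregation head could fit together. Everything here is an assumption for illustration, not the paper's implementation: the class names (`SpatialSpectralEmbedding`, `InstructionalMAE`, `TokenAggregationHead`), the per-band tokenization, the 75% mask ratio, the attention-pooling form of the aggregation, and the omission of positional embeddings are all choices made for brevity.

```python
# Hypothetical sketch of the abstract's three components; names, shapes, and
# design choices are illustrative assumptions, not the authors' architecture.
import torch
import torch.nn as nn


class SpatialSpectralEmbedding(nn.Module):
    """Project an HSI patch (C spectral bands, S x S spatial window) into a
    shared latent space so HSIs with different band counts can be handled.
    Assumption: each spectral band of the patch becomes one token."""

    def __init__(self, patch_size: int, embed_dim: int):
        super().__init__()
        # Flatten each band's S x S window to a vector, then project it.
        self.proj = nn.Linear(patch_size * patch_size, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, bands, S, S) -> tokens: (batch, bands, embed_dim)
        b, c, s, _ = x.shape
        return self.proj(x.reshape(b, c, s * s))


class InstructionalMAE(nn.Module):
    """Transformer masked autoencoder with a learnable instruction token
    prepended to the visible tokens, nudging the encoder toward global
    context relevant to the target pixel. Positional embeddings omitted."""

    def __init__(self, embed_dim=64, depth=4, heads=4, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.ins_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        enc = nn.TransformerEncoderLayer(embed_dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, depth)
        dec = nn.TransformerEncoderLayer(embed_dim, heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec, 1)
        self.head = nn.Linear(embed_dim, embed_dim)  # predicts masked tokens

    def forward(self, tokens: torch.Tensor):
        b, n, d = tokens.shape
        keep = max(1, int(n * (1 - self.mask_ratio)))
        # Per-sample random shuffle; the first `keep` indices stay visible.
        shuffle = torch.rand(b, n, device=tokens.device).argsort(dim=1)
        vis_idx, mask_idx = shuffle[:, :keep], shuffle[:, keep:]
        visible = torch.gather(
            tokens, 1, vis_idx.unsqueeze(-1).expand(-1, -1, d))
        # Prepend the instruction token and encode only the visible tokens.
        ins = self.ins_token.expand(b, -1, -1)
        encoded = self.encoder(torch.cat([ins, visible], dim=1))
        # Decoder sees encoded visible tokens plus mask tokens and predicts
        # the original tokens; the loss covers masked positions only.
        masked = self.mask_token.expand(b, n - keep, -1)
        decoded = self.decoder(torch.cat([encoded, masked], dim=1))
        pred = self.head(decoded[:, 1 + keep:])
        target = torch.gather(
            tokens, 1, mask_idx.unsqueeze(-1).expand(-1, -1, d))
        loss = nn.functional.mse_loss(pred, target)
        return loss, encoded


class TokenAggregationHead(nn.Module):
    """Fine-tuning head: a learnable weighting over all output tokens rather
    than a single class token. Attention pooling is assumed here as one way
    to realize the abstract's 'learnable aggregation mechanism'."""

    def __init__(self, embed_dim: int, num_classes: int):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        w = self.score(tokens).softmax(dim=1)  # (batch, n, 1) token weights
        pooled = (w * tokens).sum(dim=1)       # weighted sum over all tokens
        return self.fc(pooled)
```

In this sketch, pretraining would minimize the returned reconstruction loss on embedded unlabeled patches, after which the encoder and `TokenAggregationHead` would be fine-tuned on the small labeled set.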
ISSN: 1939-1404, 2151-1535
DOI: 10.1109/JSTARS.2023.3337132