Context-Aware Guided Attention Based Cross-Feedback Dense Network for Hyperspectral Image Super-Resolution
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2022, Vol. 60, pp. 1-14
Main Authors:
Format: Article
Language: English
Summary: Convolutional neural networks (CNNs) have shown impressive performance in computer vision due to their nonlinearity. In particular, DenseNet (DN), which facilitates feature reuse in a feedforward (FF) manner, has achieved state-of-the-art reconstruction accuracy for super-resolution (SR). However, most DN-based SR models transfer the features generated at each layer to all subsequent layers, inevitably introducing redundancy, especially for high-dimensional hyperspectral (HS) images. To tackle this problem, we propose a two-branch cross-feedback dense network with context-aware guided attention (CFDcagaNet) for HS super-resolution (HSSR), which allows the network to learn attention maps from high-level features and refine the low-level features in a feedback (FB) manner across the two branches. Context-aware guided attention (CAGA) uses high-level posterior information to provide more faithful spatial-spectral guidance for low-level features, enabling CFDcagaNet to learn more effective spatial-spectral features at low levels and to transfer them more effectively through the network. Extensive experiments on widely used datasets demonstrate that the proposed method outperforms state-of-the-art methods in terms of both quantitative metrics and visual quality.
ISSN: 0196-2892 (print); 1558-0644 (electronic)
DOI: 10.1109/TGRS.2022.3180484
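The summary describes the context-aware guided attention (CAGA) idea only at a high level: attention maps derived from high-level features are fed back to reweight low-level spatial-spectral features. The snippet below is a minimal illustrative sketch of that gating pattern, not the authors' CFDcagaNet implementation; the module name, layer widths, and the channel/spatial split are assumptions made for illustration only.

```python
import torch
import torch.nn as nn


class ContextAwareGuidedAttention(nn.Module):
    """Sketch: high-level features produce spectral (channel) and spatial
    attention maps that refine low-level features fed back from a later
    stage. All layer sizes are illustrative assumptions."""

    def __init__(self, channels: int):
        super().__init__()
        # Spectral (channel) attention from globally pooled high-level features.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention from the high-level feature map itself.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        # High-level "posterior" features guide both spectral and spatial weighting.
        refined = low_feat * self.channel_gate(high_feat)
        refined = refined * self.spatial_gate(high_feat)
        # Residual connection keeps the original low-level information.
        return low_feat + refined


if __name__ == "__main__":
    # Toy hyperspectral feature maps: batch=1, 64 channels, 32x32 spatial.
    low = torch.randn(1, 64, 32, 32)
    high = torch.randn(1, 64, 32, 32)
    print(ContextAwareGuidedAttention(64)(low, high).shape)  # torch.Size([1, 64, 32, 32])
```

In the paper, this kind of guidance flows in a feedback manner across two branches of a dense network; the sketch shows only a single refinement step with one low-level/high-level feature pair.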