
A Spatial-Spectral Dual-Optimization Model-Driven Deep Network for Hyperspectral and Multispectral Image Fusion

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2022, Vol. 60, pp. 1-16
Main Authors: Dong, Wenqian; Zhang, Tongzhen; Qu, Jiahui; Li, Yunsong; Xia, Haoming
Format: Article
Language: English
Description
Summary: Deep learning, especially convolutional neural networks (CNNs), has shown very promising results for the multispectral (MS) and hyperspectral (HS) image fusion (MS/HS fusion) task. Most existing CNN methods are based on "black-box" models that are not specifically designed for MS/HS fusion; they largely ignore the priors evidently possessed by the observed HS and MS images and lack clear interpretability, leaving room for further improvement. In this article, we propose an interpretable network, named the spatial-spectral dual-optimization model-driven deep network (S²DMDN), which embeds the intrinsic generation mechanism of MS/HS fusion into the network. It has two key characteristics: 1) it explicitly encodes the spatial and spectral priors evidently possessed by the input MS and HS images in the network architecture, and 2) it unfolds an iterative spatial-spectral dual-optimization algorithm into a model-driven deep network. The benefit is that the network has good interpretability and generalization capability, and the fused image is richer in semantics and more precise in spatial detail. Extensive experiments demonstrate the superiority of the proposed method over other state-of-the-art methods in terms of quantitative evaluation metrics and qualitative visual effects.
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2022.3217542
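
To make the unfolding idea in the summary concrete, below is a minimal PyTorch-style sketch of one way to unroll an iterative spatial-spectral optimization into a deep network, under the degradation model commonly assumed in MS/HS fusion (the low-resolution HS image as a blurred and spatially downsampled version of the target, the MS image as a spectrally downsampled version of it). The stage structure, module names, learned step sizes, and the average-pooling stand-in for blur plus downsampling are illustrative assumptions, not the authors' S²DMDN architecture.

```python
# Illustrative deep-unfolding sketch for MS/HS fusion (not the authors' S2DMDN code).
# Assumptions: Y_hs is a low-resolution HS cube, Y_ms a high-resolution MS image,
# the spectral response is modeled as a learned 1x1 convolution, and blur plus
# downsampling is approximated by average pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnfoldedStage(nn.Module):
    """One unfolded spatial-spectral dual-optimization step (hypothetical)."""

    def __init__(self, hs_bands, ms_bands, scale):
        super().__init__()
        self.scale = scale
        self.step_spa = nn.Parameter(torch.tensor(0.1))  # learned spatial step size
        self.step_spe = nn.Parameter(torch.tensor(0.1))  # learned spectral step size
        # Small CNN standing in for the learned proximal/prior operator.
        self.prior = nn.Sequential(
            nn.Conv2d(hs_bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, hs_bands, 3, padding=1),
        )
        # Spectral response (HS bands -> MS bands) and its adjoint, as 1x1 convs.
        self.R = nn.Conv2d(hs_bands, ms_bands, 1, bias=False)
        self.R_T = nn.Conv2d(ms_bands, hs_bands, 1, bias=False)

    def forward(self, X, Y_hs, Y_ms):
        # Spatial data-fidelity update: compare the downsampled estimate with the HS input.
        X_down = F.avg_pool2d(X, self.scale)  # crude stand-in for blur + downsampling
        res_spa = F.interpolate(X_down - Y_hs, scale_factor=self.scale,
                                mode="bilinear", align_corners=False)
        X = X - self.step_spa * res_spa
        # Spectral data-fidelity update: compare the spectrally degraded estimate with the MS input.
        res_spe = self.R_T(self.R(X) - Y_ms)
        X = X - self.step_spe * res_spe
        # Learned prior (proximal) refinement.
        return X + self.prior(X)


class UnfoldedFusionNet(nn.Module):
    """Stack K unfolded stages; initialize from the upsampled HS image."""

    def __init__(self, hs_bands=31, ms_bands=3, scale=4, num_stages=4):
        super().__init__()
        self.scale = scale
        self.stages = nn.ModuleList(
            [UnfoldedStage(hs_bands, ms_bands, scale) for _ in range(num_stages)]
        )

    def forward(self, Y_hs, Y_ms):
        X = F.interpolate(Y_hs, scale_factor=self.scale,
                          mode="bilinear", align_corners=False)
        for stage in self.stages:
            X = stage(X, Y_hs, Y_ms)
        return X
```

For example, with Y_hs of shape (1, 31, 16, 16) and Y_ms of shape (1, 3, 64, 64) at scale 4, the network returns a fused cube of shape (1, 31, 64, 64). Because each stage mirrors a step of the optimization, the learned parameters retain a physical interpretation, which is the general motivation behind model-driven unfolding networks such as the one described in the summary.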