
KNLConv: Kernel-Space Non-Local Convolution for Hyperspectral Image Super-Resolution

Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2024, Vol. 26, pp. 8836-8848
Main Authors: Ran, Ran, Deng, Liang-Jian, Zhang, Tian-Jing, Chang, Jianlong, Wu, Xiao, Tian, Qi
Format: Article
Language: English
Description
Summary: Pixel-level adaptive convolution overcomes the spatial invariance of standard convolution, but it is still limited to extracting features from local patches and ignores the latent long-range dependencies hidden in the feature space, which are particularly important in pixel-level tasks such as hyperspectral image super-resolution (HSISR). To address these limitations, we propose kernel-space non-local convolution (KNLConv), which explores non-local dependencies in the generated kernel space and leverages this global information to guide the network to extract image features more flexibly. Technically, the proposed KNLConv first decomposes the convolutional kernel space into spatial and channel dimensions and designs a depth-wise non-local expansion convolution (NLEC) in the spatial dimension of the kernel space to explore underlying global correlations. An adaptive point-wise convolution (APC) is then introduced, generalizing the NLEC to the pixel level while integrating features in the channel dimension. In addition, applying KNLConv, we design an effective network architecture for hyperspectral image super-resolution. Extensive experiments demonstrate that our approach performs favorably against current state-of-the-art HSISR methods in terms of both quantitative indicators and visual quality.
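
For intuition only, below is a minimal PyTorch-style sketch of the idea described in the abstract: per-pixel depth-wise kernels are generated from the input, a non-local (self-attention) step mixes the spatial positions of the generated kernel space (loosely standing in for NLEC), and a point-wise convolution integrates the channel dimension (loosely standing in for APC). All module and parameter names here are assumptions made for illustration; the paper's actual KNLConv design may differ in detail.

# Illustrative sketch only (not the authors' code): a simplified pixel-adaptive
# depth-wise convolution whose generated kernels are refined by a non-local
# (self-attention) step over the kernel's spatial positions, followed by a
# point-wise convolution that integrates channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KNLConvSketch(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.c, self.k = channels, kernel_size
        # Generate k*k depth-wise kernel weights per channel for every pixel.
        self.kernel_gen = nn.Conv2d(channels, channels * kernel_size ** 2, 1)
        # Projections for non-local mixing across the k*k kernel positions.
        self.q = nn.Linear(channels, channels)
        self.kproj = nn.Linear(channels, channels)
        self.v = nn.Linear(channels, channels)
        # Point-wise (1x1) convolution integrating the channel dimension.
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        k2 = self.k * self.k
        # Per-pixel depth-wise kernels: (B, C*k2, H, W) -> (B, H*W, k2, C).
        kernels = self.kernel_gen(x).view(b, c, k2, h * w).permute(0, 3, 2, 1)
        # Non-local step over the kernel space: each of the k2 kernel entries
        # attends to all others, injecting global context into the kernels.
        q, kk, v = self.q(kernels), self.kproj(kernels), self.v(kernels)
        attn = torch.softmax(q @ kk.transpose(-2, -1) / c ** 0.5, dim=-1)
        kernels = kernels + attn @ v                        # (B, HW, k2, C)
        # Apply the per-pixel kernels to im2col patches of the input.
        patches = F.unfold(x, self.k, padding=self.k // 2)  # (B, C*k2, HW)
        patches = patches.view(b, c, k2, h * w).permute(0, 3, 2, 1)
        out = (patches * kernels).sum(dim=2)                # (B, HW, C)
        out = out.permute(0, 2, 1).reshape(b, c, h, w)
        return self.pointwise(out)

As a usage check, KNLConvSketch(channels=31)(torch.randn(1, 31, 32, 32)) returns a tensor of the same shape, i.e. the sketch behaves like a drop-in replacement for a standard 3x3 convolution, which is what an HSISR backbone built on such a module would require.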
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2024.3382873