A Progressive Implicit Neural Fusion Network for Multispectral Image Pansharpening

Bibliographic Details
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2024, Vol. 17, pp. 14935-14948
Main Authors: Feng, Yao; Zhang, Long; Zhang, Yingwei; Guo, Xinguo; Xie, Guangqi; Liu, Chuang; Xiang, Shao
Format: Article
Language:English
Description
Summary: In the field of remote sensing, it is not feasible to obtain high-resolution multispectral (HRMS) images from a single satellite sensor. Existing methods use pansharpening techniques to obtain HRMS images by fusing panchromatic (PAN) and multispectral (MS) images. However, due to the scale difference between PAN and MS images, most pansharpening methods integrate features at different scales with explicit sampling. These explicit sampling techniques represent pixels as discrete points through predefined functions, making it difficult to fit the distribution across diverse modal data, which results in the loss of image texture details during fusion. Implicit neural networks can enhance the generative capability of images by incorporating pixel coordinate information, which is crucial for fusing remote sensing images with different spatial resolutions. Inspired by implicit neural representation, we propose a progressive implicit neural feature fusion network (PINFNet) for remote sensing images. The proposed progressive implicit neural feature fusion establishes a coordinate-modal relationship between spatial and spectral information, guided by the high-resolution spatial features of the PAN image. This enables PINFNet to progressively learn and integrate spatial and spectral information at different scales. In contrast to discrete sampling techniques, our method establishes a continuous representation across diverse modal data, which in turn preserves more texture detail. Extensive experiments show that this approach outperforms state-of-the-art methods while maintaining high efficiency.
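The core idea of coordinate-conditioned fusion described above can be sketched in a few lines. The snippet below is an illustrative NumPy toy, not the paper's PINFNet: a small MLP takes normalized pixel coordinates together with a PAN feature and an MS feature sampled at those coordinates, and predicts the fused HRMS bands. All function and parameter names here (`coord_grid`, `nearest_sample`, `ImplicitFusionMLP`, `fuse`) are hypothetical, and nearest-neighbour lookup stands in for whatever continuous sampling the actual network uses.

```python
import numpy as np

def coord_grid(h, w):
    # Normalized pixel-center coordinates in [-1, 1], shape (h*w, 2).
    ys = (np.arange(h) + 0.5) / h * 2 - 1
    xs = (np.arange(w) + 0.5) / w * 2 - 1
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gy.ravel(), gx.ravel()], axis=1)

def nearest_sample(feat, coords):
    # feat: (h, w, c); coords in [-1, 1].
    # Nearest-neighbour stand-in for a continuous sampling operator.
    h, w, _ = feat.shape
    iy = np.clip(((coords[:, 0] + 1) / 2 * h).astype(int), 0, h - 1)
    ix = np.clip(((coords[:, 1] + 1) / 2 * w).astype(int), 0, w - 1)
    return feat[iy, ix]

class ImplicitFusionMLP:
    # Tiny 2-layer MLP mapping [coords, PAN feat, MS feat] -> HRMS bands.
    def __init__(self, c_pan, c_ms, bands, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        d_in = 2 + c_pan + c_ms
        self.w1 = rng.standard_normal((d_in, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, bands)) * 0.1
        self.b2 = np.zeros(bands)

    def __call__(self, x):
        h = np.maximum(x @ self.w1 + self.b1, 0.0)  # ReLU
        return h @ self.w2 + self.b2

def fuse(pan_feat, ms_feat, mlp):
    # Query the implicit function on the PAN-resolution coordinate grid;
    # the low-resolution MS features are sampled at the same coordinates,
    # so no explicit upsampling layer is needed.
    h, w, _ = pan_feat.shape
    coords = coord_grid(h, w)
    x = np.concatenate(
        [coords, pan_feat.reshape(h * w, -1), nearest_sample(ms_feat, coords)],
        axis=1,
    )
    return mlp(x).reshape(h, w, -1)

# Toy usage: a 2x2 MS feature map with 6 channels fused to 8x8 with 4 bands.
pan_feat = np.random.default_rng(1).standard_normal((8, 8, 4))
ms_feat = np.random.default_rng(2).standard_normal((2, 2, 6))
hrms = fuse(pan_feat, ms_feat, ImplicitFusionMLP(c_pan=4, c_ms=6, bands=4))
```

Because the MLP is conditioned on continuous coordinates rather than a fixed upsampling kernel, the same network can in principle be queried at any target resolution, which is what makes implicit representations attractive for cross-scale fusion.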
ISSN: 1939-1404, 2151-1535
DOI: 10.1109/JSTARS.2024.3443400