DTFusion: Infrared and Visible Image Fusion Based on Dense Residual PConv-ConvNeXt and Texture-Contrast Compensation

Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2023-12, Vol. 24 (1), p. 203
Main Authors: Zhou, Xinzhi, He, Min, Zhou, Dongming, Xu, Feifei, Jeon, Seunggil
Format: Article
Language: English
Description
Summary: Infrared and visible image fusion aims to produce a single informative fused image of a scene by integrating the complementary information of the two source images. Most deep-learning-based fusion networks use small-kernel convolutions that extract features from a limited local receptive field, or rely on hand-crafted, non-learnable fusion strategies, which limits the network's feature representation capability and fusion performance. To address these problems, a novel end-to-end infrared and visible image fusion framework called DTFusion is proposed. A residual PConv-ConvNeXt module (RPCM) and dense connections are introduced into the encoder network to efficiently extract features with larger receptive fields. In addition, a texture-contrast compensation module (TCCM) with gradient residuals and an attention mechanism is designed to compensate for the texture details and contrast of the features. The fused features are reconstructed through four convolutional layers to generate a fused image with rich scene information. Experiments on public datasets show that DTFusion outperforms other state-of-the-art fusion methods in both subjective visual quality and objective metrics.
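
The record gives only the abstract, not implementation details. The following PyTorch sketch illustrates the two building blocks the abstract names: a residual block combining a partial convolution (PConv) with a ConvNeXt-style layer, and a gradient-residual texture branch of the kind the TCCM description suggests. All module names, channel splits, kernel sizes, and the Sobel-based gradient operator are assumptions made for illustration, not the authors' published configuration.

    # Minimal sketch only; hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PConv(nn.Module):
        """Partial convolution (FasterNet-style): convolve only a slice of
        the channels and pass the remaining channels through unchanged."""
        def __init__(self, dim, ratio=0.25, kernel_size=3):
            super().__init__()
            self.conv_dim = int(dim * ratio)
            self.conv = nn.Conv2d(self.conv_dim, self.conv_dim,
                                  kernel_size, padding=kernel_size // 2)

        def forward(self, x):
            a, b = torch.split(x, [self.conv_dim, x.size(1) - self.conv_dim], dim=1)
            return torch.cat([self.conv(a), b], dim=1)

    class RPCMBlock(nn.Module):
        """Hypothetical residual PConv-ConvNeXt block: cheap spatial mixing
        via PConv, a 7x7 depthwise conv for a large receptive field, then
        the ConvNeXt inverted-bottleneck MLP, wrapped in a residual."""
        def __init__(self, dim):
            super().__init__()
            self.pconv = PConv(dim)
            self.dwconv = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)
            self.norm = nn.LayerNorm(dim)      # applied channels-last below
            self.pw1 = nn.Linear(dim, 4 * dim)
            self.pw2 = nn.Linear(4 * dim, dim)

        def forward(self, x):
            residual = x
            x = self.dwconv(self.pconv(x))
            x = x.permute(0, 2, 3, 1)          # NCHW -> NHWC for LayerNorm
            x = self.pw2(F.gelu(self.pw1(self.norm(x))))
            x = x.permute(0, 3, 1, 2)
            return x + residual

    class GradientResidual(nn.Module):
        """Hypothetical gradient-residual branch: fixed Sobel filters
        extract per-channel gradients, which are projected back and added
        to the features to compensate for lost texture detail."""
        def __init__(self, dim):
            super().__init__()
            kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
            sobel = torch.stack([kx, kx.t()]).unsqueeze(1)   # (2, 1, 3, 3)
            self.register_buffer("sobel", sobel.repeat(dim, 1, 1, 1))
            self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)
            self.dim = dim

        def forward(self, x):
            # grouped conv: each input channel yields an (x, y) gradient pair
            g = F.conv2d(x, self.sobel, padding=1, groups=self.dim)
            return x + self.fuse(g)

    if __name__ == "__main__":
        feats = torch.randn(1, 64, 128, 128)
        print(RPCMBlock(64)(feats).shape)         # torch.Size([1, 64, 128, 128])
        print(GradientResidual(64)(feats).shape)  # torch.Size([1, 64, 128, 128])

Both blocks preserve the input shape, so they can be stacked and densely connected inside an encoder as the abstract describes; the actual channel ratio, attention mechanism, and fusion rule would have to be taken from the full text.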
ISSN: 1424-8220
DOI: 10.3390/s24010203