
A three-dimensional feature-based fusion strategy for infrared and visible image fusion

Bibliographic Details
Published in:Pattern recognition 2025-01, Vol.157, p.110885, Article 110885
Main Authors: Liu, Xiaowen, Huo, Hongtao, Yang, Xin, Li, Jing
Format: Article
Language:English
Description
Summary:Due to the lack of attention to the scene's essential characteristics, existing fusion methods suffer from scene distortion. In addition, the absence of ground truth can lead to inadequate representation of vital information. To this end, we propose a novel infrared and visible image fusion network based on a three-dimensional feature fusion strategy (D3Fuse). In our method, we consider the scene semantic information in the source images and extract the common content of the two images as a third-dimensional feature, extending the feature space for fusion tasks. Specifically, a commonality feature extraction module (CFEM) is designed to extract scene commonality features. Subsequently, the scene commonality features are used together with modality features to construct the fused image. Moreover, to ensure the independence and diversity of distinct features, we employ a contrastive learning strategy with multiscale PCA coding, which stretches feature distances in an unsupervised manner, prompting the encoder to extract more discriminative information without incurring additional parameters or computational cost. Furthermore, a contrastive enhancement strategy is used to ensure adequate representation of modality information. Qualitative and quantitative evaluations on three datasets show that the proposed method achieves better visual quality and higher objective metrics at lower computational cost. Object detection experiments show that our results perform well on high-level semantic tasks.
•A three-dimensional feature fusion strategy is proposed to extend the feature space.
•We inject scene commonality features into the fusion results to enhance visibility.
•A contrastive learning strategy is used to optimize the feature encoding.
•A contrastive enhancement strategy is adopted for modality information retention.
•Experiments demonstrate performance on both fusion and high-level vision tasks.
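The abstract describes a contrastive learning strategy with multiscale PCA coding that stretches the distance between distinct features in an unsupervised manner. As a minimal sketch only (not the authors' implementation; the function names, the hinge-style margin loss, and the NumPy-based PCA projection are all assumptions), the idea of PCA-coding feature maps and then pushing the resulting codes apart might look like:

```python
import numpy as np

def pca_code(features, k=8):
    """Project feature vectors (n_samples, n_dims) onto the top-k
    principal components (PCA coding) via SVD of the centered data."""
    X = features - features.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T  # shape: (n_samples, k)

def contrastive_separation(f_ir, f_vis, f_common, margin=1.0):
    """Hinge-style separation loss: penalize pairs of feature codes
    (infrared, visible, commonality) whose mean distance is below
    `margin`, encouraging the encoder to keep them discriminative."""
    def mean_dist(a, b):
        return np.linalg.norm(a - b, axis=1).mean()
    pairs = [(f_ir, f_vis), (f_ir, f_common), (f_vis, f_common)]
    return sum(max(0.0, margin - mean_dist(a, b)) for a, b in pairs)

# Illustrative usage with random stand-in features.
rng = np.random.default_rng(0)
z_ir = pca_code(rng.normal(size=(32, 64)))
z_vis = pca_code(rng.normal(size=(32, 64)))
z_common = pca_code(rng.normal(size=(32, 64)))
loss = contrastive_separation(z_ir, z_vis, z_common)
```

The loss is zero once all three codes are at least `margin` apart on average, which is one simple way a distance-stretching objective can operate without extra learned parameters.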
ISSN:0031-3203
DOI:10.1016/j.patcog.2024.110885