A Multilevel Hybrid Transmission Network for Infrared and Visible Image Fusion


Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2022, Vol. 71, pp. 1-14
Main Authors: Li, Qingqing; Han, Guangliang; Liu, Peixun; Yang, Hang; Chen, Dianbing; Sun, Xinglong; Wu, Jiajia; Liu, Dongxu
Format: Article
Language: English
Description
Summary: Infrared and visible image fusion aims to generate an image with prominent target information and abundant texture details. Most existing methods rely on manually designed, complex fusion rules to realize image fusion. Some deep learning fusion networks tend to ignore the correlation between different-level features, which may cause loss of intensity information and texture details in the fused image. To overcome these drawbacks, we propose a multilevel hybrid transmission network for infrared and visible image fusion, which mainly contains the multilevel residual encoder module (MREM) and the hybrid transmission decoder module (HTDM). Considering the great difference between infrared and visible images, the MREM is designed with two independent branches to extract abundant features from the source images. To avoid complicated fusion strategies, a concatenate convolution is applied to fuse the features. To utilize information from the source images efficiently, the HTDM is constructed to integrate features from different levels. Experimental results and analyses on three public datasets demonstrate that our method not only achieves high-quality image fusion but also outperforms the comparison methods in both qualitative and quantitative evaluations. In addition, the proposed method has good real-time performance in infrared and visible image fusion.
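The abstract describes a pipeline of two independent encoder branches (one per modality) whose features are fused by concatenation followed by a convolution, rather than by a hand-crafted fusion rule. The following NumPy sketch illustrates that idea under stated assumptions: the `conv2d` helper, layer counts, channel sizes, and weight shapes are all illustrative inventions, not the paper's actual MREM/HTDM architecture.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same'-padded 2-D convolution with ReLU.
    x: (C_in, H, W) feature map; w: (C_out, C_in, k, k) kernels."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h, wd = x.shape[1], x.shape[2]
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[o])
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
ir = rng.standard_normal((1, 8, 8))   # single-channel infrared input
vis = rng.standard_normal((1, 8, 8))  # single-channel visible input

# Two independent encoder branches (one illustrative conv layer each),
# reflecting the large appearance gap between the two modalities.
w_ir = rng.standard_normal((4, 1, 3, 3)) * 0.1
w_vis = rng.standard_normal((4, 1, 3, 3)) * 0.1
f_ir = conv2d(ir, w_ir)    # (4, 8, 8) infrared features
f_vis = conv2d(vis, w_vis)  # (4, 8, 8) visible features

# Fusion by channel-wise concatenation followed by a convolution,
# instead of a manually designed fusion rule.
fused_in = np.concatenate([f_ir, f_vis], axis=0)  # (8, 8, 8)
w_fuse = rng.standard_normal((4, 8, 3, 3)) * 0.1
fused = conv2d(fused_in, w_fuse)  # (4, 8, 8) fused features

print(fused.shape)
```

A real implementation would stack several such layers per branch and let the decoder mix features across levels; the point here is only the concatenate-then-convolve fusion step that replaces explicit fusion rules.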
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2022.3186048