
Fusion-UDCGAN: Multifocus Image Fusion via a U-Type Densely Connected Generation Adversarial Network

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2022, Vol. 71, pp. 1-13
Main Authors: Gao, Yuan; Ma, Shiwei; Liu, Jingjing; Xiu, Xianchao
Format: Article
Language:English
Description
Summary: Multifocus image fusion has attracted considerable attention because it can overcome the physical limitations of optical imaging equipment and fuse multiple images with different depths of field into a single fully focused image. However, most existing deep learning-based fusion methods concentrate on the segmentation of focus-defocus regions, resulting in the loss of details near the boundaries. To address this issue, this article proposes a novel generative adversarial network with dense connections (Fusion-UDCGAN) to fuse multifocus images. More specifically, the encoder and the decoder are composed of dense modules with long dense connections to ensure the quality of the generated image. A content and clarity loss based on the L1 norm and the novel sum-modified-Laplacian (NSML) is further embedded so that the fused images retain more texture features. Considering that previous dataset construction approaches may lose the relation between the overall structure and the information near the boundaries, a new dataset, which is uniformly distributed and simulates natural focusing boundary conditions, is constructed for model training. Subjective and objective experimental results indicate that the proposed method significantly improves sharpness, contrast, and detail richness compared with several state-of-the-art methods.
ISSN: 0018-9456
1557-9662
DOI: 10.1109/TIM.2022.3159978
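
Note on the clarity term mentioned in the summary: the sum-modified-Laplacian is a standard focus measure that sums the absolute horizontal and vertical second differences over a local window. The sketch below, written in PyTorch, shows one plausible way an SML-based clarity term could be combined with an L1 content term; the function names, the grayscale-input assumption, and the lambda_clarity weight are illustrative assumptions and do not reproduce the paper's exact NSML formulation or training configuration.

import torch
import torch.nn.functional as F

def modified_laplacian(img: torch.Tensor) -> torch.Tensor:
    # Modified Laplacian ML(x, y) = |2I - I_left - I_right| + |2I - I_up - I_down|,
    # computed with two fixed 3x3 kernels. img: (B, 1, H, W) grayscale tensor.
    kx = torch.tensor([[0., 0., 0.],
                       [-1., 2., -1.],
                       [0., 0., 0.]], dtype=img.dtype, device=img.device).view(1, 1, 3, 3)
    ky = torch.tensor([[0., -1., 0.],
                       [0., 2., 0.],
                       [0., -1., 0.]], dtype=img.dtype, device=img.device).view(1, 1, 3, 3)
    return F.conv2d(img, kx, padding=1).abs() + F.conv2d(img, ky, padding=1).abs()

def sml(img: torch.Tensor, window: int = 3) -> torch.Tensor:
    # Sum-modified-Laplacian: local sum of the modified Laplacian over a window
    # (average pooling rescaled by the window area gives the windowed sum).
    ml = modified_laplacian(img)
    return F.avg_pool2d(ml, window, stride=1, padding=window // 2) * (window * window)

def fusion_loss(fused, src_a, src_b, lambda_clarity=1.0):  # lambda_clarity is a hypothetical weight
    # Illustrative content + clarity objective: an L1 content term toward both source
    # images, plus a clarity term that pushes the fused image's SML response toward
    # the sharper (pixel-wise maximum) of the two sources.
    content = (fused - src_a).abs().mean() + (fused - src_b).abs().mean()
    clarity = (sml(fused) - torch.maximum(sml(src_a), sml(src_b))).abs().mean()
    return content + lambda_clarity * clarity

Because every operation above is differentiable, such a term could in principle be added to a generator's training objective; the actual Fusion-UDCGAN loss weighting and NSML variant should be taken from the article itself.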