CDRNet: Cascaded dense residual network for grayscale and pseudocolor medical image fusion
Published in: Computer Methods and Programs in Biomedicine, 2023-06, Vol. 234, Article 107506
Main Authors:
Format: Article
Language: English
Summary:
• To the best of our knowledge, this is the first study to propose CDRNet for grayscale and pseudocolor medical image fusion.
• The architecture and principle of CDRNet are introduced, and its advantages are analyzed through ablation experiments.
• CDRNet is an end-to-end model: once training is complete, the fused image is obtained directly from the input grayscale and pseudocolor medical images.
• Compared with existing methods, the fusion results of CDRNet have richer details and better objective indicators.
Multimodal medical fusion images are widely used in clinical medicine, computer-aided diagnosis, and other fields. However, existing multimodal medical image fusion algorithms generally suffer from shortcomings such as complex computation, blurred details, and poor adaptability. To address these problems, we propose a cascaded dense residual network (CDRNet) and apply it to grayscale and pseudocolor medical image fusion.
The cascaded dense residual network uses a multiscale dense network and a residual network as its basic architecture, and a multilevel converged network is obtained by cascading. The cascade contains three networks: the first-level network takes the two images of different modalities as input and produces fused image 1; the second-level network takes fused image 1 as input and produces fused image 2; and the third-level network takes fused image 2 as input and produces fused image 3. The multimodal medical images pass through each level of the network during training, and the output fused image is enhanced step by step.
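A minimal PyTorch sketch of this three-level cascade is given below. The module names, layer choices, and channel layout (a 1-channel grayscale image concatenated with a 3-channel pseudocolor image) are illustrative assumptions, not the authors' published code; each `FusionLevel` is a simple stand-in for the multiscale dense + residual blocks described in the paper.

```python
# Sketch of the three-level cascade described in the abstract.
# All names and internals are assumptions for illustration only.
import torch
import torch.nn as nn

class FusionLevel(nn.Module):
    """One cascade level: a plain conv stack standing in for the
    multiscale dense + residual architecture of each CDRNet level."""
    def __init__(self, in_ch, feat=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1),  # single-channel fused output
        )

    def forward(self, x):
        return self.body(x)

class CascadedFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.level1 = FusionLevel(in_ch=4)  # 1-ch grayscale + 3-ch pseudocolor (assumed layout)
        self.level2 = FusionLevel(in_ch=1)  # refines fused image 1
        self.level3 = FusionLevel(in_ch=1)  # refines fused image 2

    def forward(self, gray, pseudo):
        fused1 = self.level1(torch.cat([gray, pseudo], dim=1))  # level 1: fuse the two modalities
        fused2 = self.level2(fused1)                            # level 2: enhance fused image 1
        fused3 = self.level3(fused2)                            # level 3: enhance fused image 2
        return fused3

# Usage: out = CascadedFusion()(torch.rand(1, 1, 256, 256), torch.rand(1, 3, 256, 256))
```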
As the number of cascaded networks increases, the fused image becomes progressively clearer. Extensive fusion experiments show that the fused images produced by the proposed algorithm have higher edge strength, richer details, and better scores on the objective indicators than those of the reference algorithms.
Compared with the reference algorithms, the proposed algorithm better preserves the original information and achieves higher edge strength, richer details, and improvements in the four objective metrics SF (spatial frequency), AG (average gradient), MI (mutual information), and EN (entropy).
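For reference, the sketch below computes three of these metrics (SF, AG, EN) under their standard definitions in the image-fusion literature; whether CDRNet uses exactly these formulations is an assumption.

```python
# Standard single-image fusion metrics; exact formulations in the
# paper may differ from these common definitions.
import numpy as np

def spatial_frequency(img):
    """SF: combined row/column gradient energy of the image."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def average_gradient(img):
    """AG: mean magnitude of the local intensity gradient."""
    gx = np.diff(img, axis=1)[:-1, :]  # crop to matching shapes
    gy = np.diff(img, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))

def entropy(img, bins=256):
    """EN: Shannon entropy of the grayscale histogram (assumes 8-bit range)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return -np.sum(p * np.log2(p))
```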
ISSN: 0169-2607, 1872-7565
DOI: 10.1016/j.cmpb.2023.107506