
Research of Multimodal Medical Image Fusion Based on Parameter-Adaptive Pulse-Coupled Neural Network and Convolutional Sparse Representation

Bibliographic Details
Published in: Computational and Mathematical Methods in Medicine, Vol. 2020 (2020), p. 1-13
Main Authors: Xia, Jingming; Tan, Ling; Lu, Yi
Format: Article
Language: English
Summary: The visual quality of medical images has a great impact on assisted clinical diagnosis, and medical image fusion has become a powerful tool in clinical applications. Traditional medical image fusion methods often produce poor results because detailed feature information is lost during fusion. To address this, this paper proposes a new multimodal medical image fusion method based on the imaging characteristics of medical images. In the proposed method, non-subsampled shearlet transform (NSST) decomposition is first performed on the source images to obtain high-frequency and low-frequency coefficients. The high-frequency coefficients are fused by a parameter-adaptive pulse-coupled neural network (PAPCNN) model, in which adaptive parameter estimation and an optimized connection strength β are adopted to improve performance. The low-frequency coefficients are merged by a convolutional sparse representation (CSR) model. The experimental results show that the proposed method overcomes the difficult parameter setting of traditional PCNN algorithms and the poor detail preservation of sparse representation during image fusion, and that it offers significant advantages in visual quality and objective indices over existing mainstream fusion algorithms.
ISSN: 1748-670X, 1748-6718
DOI: 10.1155/2020/3290136
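
The abstract describes a decompose-fuse-reconstruct pipeline: NSST decomposition of the source images, a PAPCNN rule for the high-frequency coefficients, and a CSR rule for the low-frequency coefficients. The minimal Python sketch below illustrates only that overall structure; as simplifying assumptions it substitutes a Gaussian base/detail split for NSST, a choose-max rule for the PAPCNN fusion, and plain averaging for the CSR fusion, so it is an illustration of the scheme rather than the authors' actual method.

import numpy as np
from scipy.ndimage import gaussian_filter

def fuse(img_a: np.ndarray, img_b: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Fuse two registered grayscale source images given as float arrays in [0, 1]."""
    # Stand-in for the NSST decomposition: split each source image into a
    # low-frequency base layer (Gaussian blur) and a high-frequency detail layer.
    base_a, base_b = gaussian_filter(img_a, sigma), gaussian_filter(img_b, sigma)
    detail_a, detail_b = img_a - base_a, img_b - base_b

    # Stand-in for the PAPCNN high-frequency rule: keep the detail coefficient
    # with the larger absolute value at each pixel (a simple choose-max rule).
    detail_f = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)

    # Stand-in for the CSR low-frequency rule: average the base layers.
    base_f = 0.5 * (base_a + base_b)

    # Inverse of the decomposition: recombine the fused base and detail layers.
    return np.clip(base_f + detail_f, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((128, 128)), rng.random((128, 128))
    print(fuse(a, b).shape)  # (128, 128)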