
Multi-perspective feature compensation enhanced network for medical image segmentation

Bibliographic Details
Published in: Biomedical signal processing and control 2025-02, Vol.100, p.107099, Article 107099
Main Authors: Zhu, Chengzhang, Zhang, Renmao, Xiao, Yalong, Zou, Beiji, Yang, Zhangzheng, Li, Jianfeng, Li, Xinze
Format: Article
Language:English
Description
Summary: The accuracy of medical image segmentation is crucial for clinical analysis and diagnosis. Despite progress with U-Net-inspired models, they often underuse the multi-scale convolutional layers that are crucial for capturing detailed visual features, and they overlook the importance of merging multi-scale features along the channel dimension to strengthen the decoder's representation. To address these limitations, we introduce a Multi-perspective Feature Compensation Enhanced Network (MFCNet) for medical image segmentation. Our network design is characterized by the strategic employment of dual-scale convolutional kernels at each encoder level, enabling the precise capture of both granular and broader context features throughout the encoding phase. We further enhance the model by integrating a Dual-scale Channel-wise Cross-fusion Transformer (DCCT) mechanism within the skip connections, which effectively integrates the dual-scale features. We then apply a spatial attention (SA) mechanism to amplify the salient regions within the dual-scale features. These enhanced features are merged with the feature map at the same level of the decoder, thereby augmenting the overall feature representation. Our proposed MFCNet has been evaluated on three distinct medical image datasets, and the experimental results demonstrate that it achieves more accurate segmentation and adapts to varying segmentation targets, making it more competitive than existing methods. The code is available at: https://github.com/zrm-code/MFCNet.
Highlights:
• Innovative design: MFCNet introduces dual-scale convolutional layers and a DCCT mechanism to capture both fine-grained and coarse-grained image features, enhancing the decoder's representational capacity.
• Feature fusion: Dual-scale channel-wise cross-fusion and spatial attention mechanisms optimize multi-scale feature fusion, improving the accuracy of medical image segmentation.
• Performance enhancement: Experimental results indicate that MFCNet surpasses existing methods in accuracy and adaptability to diverse segmentation targets, demonstrating a clear advantage.
• Wide applicability: Evaluation on three medical image datasets confirms the practicality of MFCNet for clinical analysis and diagnosis.
• Competitive edge: Through the integration of multi-perspective feature compensation strategies, MFCNet remains competitive in complex medical image segmentation.
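To make the described architecture concrete, the following is a minimal PyTorch sketch of a dual-scale encoder block and a spatial attention step as outlined in the abstract. The module names, the 3×3/5×5 kernel choice, and the plain channel concatenation standing in for the DCCT fusion are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch of a dual-scale encoder block with spatial-attention
# re-weighting. Kernel sizes and fusion strategy are assumptions made
# for illustration only.
import torch
import torch.nn as nn


class DualScaleConvBlock(nn.Module):
    """Encoder block that extracts fine (3x3) and broad (5x5) context in parallel."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.fine = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
        self.broad = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor):
        # Return both scales so the skip connection can fuse them later.
        return self.fine(x), self.broad(x)


class SpatialAttention(nn.Module):
    """Amplifies salient spatial regions of a fused dual-scale feature map."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)        # channel-wise average
        max_pool, _ = x.max(dim=1, keepdim=True)      # channel-wise max
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                               # re-weight spatial positions


if __name__ == "__main__":
    block = DualScaleConvBlock(in_ch=64, out_ch=128)
    sa = SpatialAttention()
    x = torch.randn(1, 64, 56, 56)
    fine, broad = block(x)
    # A simple concatenation stands in for the DCCT cross-fusion step here.
    fused = sa(torch.cat([fine, broad], dim=1))
    print(fused.shape)  # torch.Size([1, 256, 56, 56])
```

In the full model, per the abstract, the concatenation above would be replaced by the DCCT cross-fusion applied along the channel dimension inside the skip connections, and the SA-enhanced features would then be merged with the decoder feature map at the same level.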
ISSN:1746-8094
DOI:10.1016/j.bspc.2024.107099