
A feature fusion module based on complementary attention for medical image segmentation

Bibliographic Details
Published in: Displays, 2024-09, Vol. 84, p. 102811, Article 102811
Main Authors: Yang, Mingyue, Dong, Xiaoxuan, Zhang, Wang, Xie, Peng, Li, Chuan, Chen, Shanxiong
Format: Article
Language: English
Description
Summary: Automated segmentation algorithms are a crucial component of medical image analysis, playing an essential role in assisting professionals to achieve accurate diagnoses. Traditional convolutional neural networks (CNNs) face challenges when dealing with complex and variable lesions: limited by the receptive field of convolutional operators, CNNs often struggle to capture the long-range dependencies of complex lesions. The transformer's outstanding ability to capture long-range dependencies offers a new perspective on addressing these challenges. Inspired by this, our research aims to combine the precise spatial detail extraction capabilities of CNNs with the global semantic understanding abilities of transformers. Unlike traditional fusion methods, we propose a fine-grained feature fusion strategy based on complementary attention, deeply exploring and complementarily fusing the feature representations of the encoder. Moreover, considering that relying on feature fusion alone might overlook critical texture details and key edge features in the segmentation task, we designed a feature enhancement module based on information entropy. This module emphasizes shallow texture features and edge information, enabling the model to more accurately capture and enhance multi-level details of the image, further improving segmentation results. Our method was tested on multiple public segmentation datasets of polyps and skin lesions, and performed better than state-of-the-art methods. Extensive qualitative experimental results indicate that our method maintains robust performance even when faced with challenging cases of lesions with narrow or blurry boundaries.
Highlights:
• A refined feature fusion method based on complementary attention has been designed to improve the overall quality of local and global feature fusion.
• An information entropy-based feature enhancement module has been designed. The module utilizes information entropy and a series of convolutional operations to capture and enhance key boundary features of the image.
• Based on the above designs, a new medical image segmentation network has been constructed. This network adopts parallel CNN and Transformer encoding paths, enabling the extraction of richer image details and more comprehensive image content.
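
Illustrative sketch: the record only describes the two modules at a high level, so the PyTorch code below is one plausible reading rather than the authors' implementation. The module names (ComplementaryFusion, EntropyEnhance), the cross-branch sigmoid gating, and the channel-wise Shannon-entropy map are assumptions made to give the abstract's ideas concrete form.

# Minimal sketch of (1) complementary-attention fusion of CNN and Transformer
# features and (2) entropy-guided enhancement of shallow features.
# All design details here are assumptions; the paper's exact formulation may differ.
import torch
import torch.nn as nn


class ComplementaryFusion(nn.Module):
    """Fuse CNN and Transformer features with complementary spatial gates."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions produce a single-channel spatial gate from each branch.
        self.gate_cnn = nn.Conv2d(channels, 1, kernel_size=1)
        self.gate_trans = nn.Conv2d(channels, 1, kernel_size=1)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_cnn: torch.Tensor, f_trans: torch.Tensor) -> torch.Tensor:
        # Each branch is re-weighted by the gate derived from the *other*
        # branch, so each stream emphasizes what its counterpart provides:
        # global context for the CNN path, local detail for the Transformer path.
        a_cnn = torch.sigmoid(self.gate_trans(f_trans))
        a_trans = torch.sigmoid(self.gate_cnn(f_cnn))
        fused = torch.cat([f_cnn * a_cnn, f_trans * a_trans], dim=1)
        return self.proj(fused)


class EntropyEnhance(nn.Module):
    """Emphasize high-entropy (texture- and edge-rich) regions of shallow features."""

    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Treat the channel activations at each pixel as a distribution and
        # compute its Shannon entropy; boundaries and textured regions tend
        # to score higher than homogeneous regions.
        p = torch.softmax(x, dim=1)
        entropy = -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1, keepdim=True)
        entropy = entropy / entropy.amax(dim=(2, 3), keepdim=True).clamp_min(1e-8)
        # Residual-style emphasis: keep the original features, boost high-entropy areas.
        return self.refine(x * (1.0 + entropy))


if __name__ == "__main__":
    f_cnn = torch.randn(2, 64, 56, 56)    # local-detail features from the CNN path
    f_trans = torch.randn(2, 64, 56, 56)  # global-context features from the Transformer path
    fused = ComplementaryFusion(64)(f_cnn, f_trans)
    enhanced = EntropyEnhance(64)(fused)
    print(fused.shape, enhanced.shape)    # torch.Size([2, 64, 56, 56]) twice

The sketch assumes both encoder paths have already been projected to the same spatial resolution and channel count before fusion; how the paper aligns the two streams is not stated in this record.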
ISSN: 0141-9382
DOI: 10.1016/j.displa.2024.102811