
Multitask Learning with Multiscale Residual Attention for Brain Tumor Segmentation and Classification

Bibliographic Details
Published in: International Journal of Automation and Computing, 2023-12, Vol. 20 (6), p. 897-908
Main Authors: Li, Gaoxiang, Hui, Xiao, Li, Wenjing, Luo, Yanlin
Format: Article
Language:English
Description
Summary: Automatic segmentation and classification of brain tumors are of great importance to clinical treatment. However, they are challenging due to the varied and small morphology of the tumors. In this paper, we propose a multitask multiscale residual attention network (MMRAN) to simultaneously solve the problem of accurately segmenting and classifying brain tumors. The proposed MMRAN is based on U-Net, and a parallel branch is added at the end of the encoder as the classification network. First, we propose a novel multiscale residual attention module (MRAM) that can aggregate contextual features and better combine channel attention and spatial attention, and we add it to the shared parameter layer of MMRAN. Second, we propose a dynamic weight training method that improves model performance while minimizing the need for multiple experiments to determine the optimal weights for each task. Finally, prior knowledge of brain tumors is added to the postprocessing of segmented images to further improve the segmentation accuracy. We evaluated MMRAN on a brain tumor data set containing meningioma, glioma, and pituitary tumors. In terms of segmentation performance, our method achieves Dice, Hausdorff distance (HD), mean intersection over union (MIoU), and mean pixel accuracy (MPA) values of 80.03%, 6.649 mm, 84.38%, and 89.41%, respectively. In terms of classification performance, our method achieves accuracy, recall, precision, and F1-score of 89.87%, 90.44%, 88.56%, and 89.49%, respectively. Compared with other networks, MMRAN performs better in segmentation and classification, which significantly aids medical professionals in brain tumor management. The code and data set are available at https://github.com/linkenfaqiu/MMRAN.
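The "dynamic weight training" mentioned in the abstract balances the segmentation and classification losses without a manual grid search over fixed weights. The paper's exact scheme is not given in this record; the sketch below uses a Dynamic Weight Average-style rule purely as an illustration (the function name, the `temperature` parameter, and the example loss values are all assumptions, not the authors' code): a task whose loss is falling more slowly receives a larger weight in the next epoch's combined loss.

```python
import math

def dynamic_weights(prev_losses, curr_losses, temperature=2.0):
    """Illustrative Dynamic Weight Average-style task weighting.

    Computes each task's relative loss-descent rate (current / previous
    epoch loss) and softmax-normalizes the rates, so slower-improving
    tasks get proportionally larger weights. Weights sum to the number
    of tasks, keeping the combined loss on a comparable scale.
    """
    rates = [c / p for c, p in zip(curr_losses, prev_losses)]
    exps = [math.exp(r / temperature) for r in rates]
    total = sum(exps)
    n = len(rates)
    return [n * e / total for e in exps]

# Example: the segmentation loss dropped faster (1.0 -> 0.6) than the
# classification loss (1.0 -> 0.9), so classification is weighted up.
w_seg, w_cls = dynamic_weights(prev_losses=[1.0, 1.0],
                               curr_losses=[0.6, 0.9])

def combined_loss(l_seg, l_cls):
    # Weighted sum used to train the shared encoder of both branches.
    return w_seg * l_seg + w_cls * l_cls
```

The softmax normalization keeps the weights smooth across epochs; a fixed-weight baseline would instead require retraining the network once per candidate weight pair.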
ISSN: 2731-538X
1476-8186
2731-5398
1751-8520
DOI: 10.1007/s11633-022-1392-6