Segmentation and Estimation of Fetal Biometric Parameters using an Attention Gate Double U-Net with Guided Decoder Architecture
Published in: Computers in Biology and Medicine, 2024-09, Vol. 180, Article 109000
Main Authors: , , , ,
Format: Article
Language: English
Summary: Fetal health is evaluated using biometric parameters obtained from low-resolution ultrasound images. In existing protocols, the accuracy of these parameters typically depends on conventional image-processing approaches and is therefore prone to error. This study introduces the Attention Gate Double U-Net with Guided Decoder (ADU-GD) model, crafted specifically for fetal biometric parameter prediction. Its attention network and guided decoder are designed to dynamically merge local features with their global dependencies, improving the precision of parameter estimation. ADU-GD displays superior performance, with a Mean Absolute Error (MAE) of 0.99 mm and a segmentation accuracy of 99.1% when benchmarked against well-established models. The proposed model consistently achieved a high Dice index of about 99.1 ± 0.8, a minimal Hausdorff distance of about 1.01 ± 1.07, and a low Average Symmetric Surface Distance of about 0.25 ± 0.21. In a comprehensive evaluation, ADU-GD outperformed existing deep-learning models (Double U-Net, DeepLabv3, FCN-32s, PSPNet, SegNet, Trans U-Net, Swin U-Net, Mask-R2CNN, and RDHCformer) in terms of MAE for crucial fetal dimensions, including Head Circumference, Abdomen Circumference, Femur Length, and BiParietal Diameter, achieving MAE values of 2.2 mm, 2.6 mm, 0.6 mm, and 1.2 mm, respectively.
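The segmentation metrics quoted in the summary (Dice index, Hausdorff distance, Average Symmetric Surface Distance) have standard definitions for binary masks. The following is a minimal NumPy sketch of those definitions for 2-D masks, purely for illustration; it is not the authors' evaluation code, and all function names here are hypothetical.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def boundary(mask):
    """Coordinates of boundary pixels: foreground pixels with at
    least one background 4-neighbour."""
    p = np.pad(mask, 1)  # pads with False, so image edges count as background
    core = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])
    return np.argwhere(mask & ~core)

def surface_distances(a, b):
    """Nearest-neighbour distance from each boundary pixel of a
    to the boundary of b (brute-force pairwise distances)."""
    pa, pb = boundary(a), boundary(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return d.min(axis=1)

def hausdorff(a, b):
    """Symmetric Hausdorff distance: worst-case boundary mismatch."""
    return max(surface_distances(a, b).max(),
               surface_distances(b, a).max())

def assd(a, b):
    """Average Symmetric Surface Distance: mean of all
    boundary-to-boundary nearest-neighbour distances, both ways."""
    da, db = surface_distances(a, b), surface_distances(b, a)
    return (da.sum() + db.sum()) / (len(da) + len(db))
```

For example, two 6×6 square masks offset diagonally by one pixel give a Dice of 50/72 ≈ 0.69 and a Hausdorff distance of √2 (the corner pixels are the worst-matched boundary points). Distances here are in pixels; converting to millimetres, as in the reported results, requires the scan's pixel spacing.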
Highlights:
- Proposes ADU-GD, a novel deep-learning architecture for precise fetal ultrasound image segmentation and biometric measurement.
- Improves segmentation accuracy, especially in the first trimester, using an attention mechanism and deep supervision.
- Boosts the model's precision in identifying complex structures by integrating attention-gated modules for clear delineation.
ISSN: 0010-4825, 1879-0534
DOI: 10.1016/j.compbiomed.2024.109000