
Deep dive in retinal fundus image segmentation using deep learning for retinopathy of prematurity

Bibliographic Details
Published in: Multimedia Tools and Applications, March 2022, Vol. 81, No. 8, pp. 11441-11460
Main Authors: Agrawal, Ranjana; Kulkarni, Sucheta; Walambe, Rahee; Deshpande, Madan; Kotecha, Ketan
Format: Article
Language: English
Description
Summary: Segmentation of retinal structures, namely the optic disc, vessels, demarcation line, and ridge, is essential for characterizing Retinopathy of Prematurity (ROP). Computerized systems are being developed for automatic segmentation of fundus images to assist medical experts and bring consistency to the diagnosis. Segmenting premature infants' fundus images poses multiple challenges. The annotation and ground-truth preparation required for segmentation are complex, demanding, and expensive. Further, ROP datasets are not publicly available, so carrying out such a task requires a primary dataset and significant assistance from a domain expert. To address this gap, two primary datasets, HVDROPDB-BV and HVDROPDB-RIDGE, were developed. They consist of images captured by two different imaging systems with different sizes, resolutions, and illumination, which makes the trained models generic and robust to data variability and heterogeneity. We propose modified U-Net architectures that incorporate squeeze-and-excitation (SE) blocks and attention gates (AG) to segment the demarcation line/ridge and vessels from these datasets. The modifications were tested and validated by ROP experts. The performance of all three networks (U-Net, AG U-Net, and SE U-Net) was promising, with a variation of 1 to 6% in the Dice coefficient across the HVDROPDB datasets. The area under the curve (AUC) for all three networks was above 0.94, indicating excellent models. AG U-Net outperformed the other two, with 96% sensitivity and 89% specificity for stage detection on new test images.
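The squeeze-and-excitation (SE) mechanism referenced in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general SE idea (global average pooling, a two-layer bottleneck, and sigmoid channel gating), not the paper's actual layer sizes, reduction ratio, or learned weights, all of which are assumed here for illustration:

```python
import numpy as np

def se_block(feature_map, reduction=4, rng=None):
    """Squeeze-and-excitation: recalibrate channels of a (C, H, W) feature map."""
    rng = rng or np.random.default_rng(0)
    c = feature_map.shape[0]
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: two fully connected layers with a bottleneck of size C/reduction
    # (weights are random here; in a trained network they are learned)
    w1 = rng.standard_normal((c // reduction, c))
    w2 = rng.standard_normal((c, c // reduction))
    s = np.maximum(w1 @ z, 0.0)            # ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))    # sigmoid gate, one value in (0, 1) per channel
    # Scale: reweight each channel of the input by its gate
    return feature_map * s[:, None, None]

x = np.ones((8, 4, 4))
y = se_block(x)
print(y.shape)  # (8, 4, 4): same shape, channels rescaled
```

In a U-Net, such a block is typically inserted after convolutional stages so the network can emphasize channels that respond to faint structures such as the demarcation line.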
ISSN: 1380-7501
eISSN: 1573-7721
DOI: 10.1007/s11042-022-12396-z