
Diabetic retinopathy detection and stage classification in eye fundus images using active deep learning

Bibliographic Details
Published in: Multimedia Tools and Applications, 2021-03, Vol. 80 (8), p. 11691-11721
Main Authors: Qureshi, Imran; Ma, Jun; Abbas, Qaisar
Format: Article
Language: English
Description
Summary: Retinal fundus image analysis (RFIA) for diabetic retinopathy (DR) screening can be used to reduce the risk of blindness among diabetic patients. RFIA screening programs help ophthalmologists cope with this paramount visual impairment problem. In this article, an automatic method for recognizing the DR stage is proposed, based on a new multi-layer architecture of active deep learning (ADL). To develop the ADL system, a convolutional neural network (CNN) model is used to extract features automatically, rather than relying on handcrafted features. However, training a CNN requires a very large amount of labeled data, which makes the classification phase difficult. Therefore, a label-efficient CNN architecture, called ADL-CNN, is presented, which uses an active learning method known as expected gradient length (EGL). The ADL-CNN model can be seen as a two-stage process. First, it selects the most informative patches and images, using ground-truth labels of training samples, to learn simple-to-complex retinal features. Next, it provides prognostic masks that assist clinical specialists in annotating important eye samples and segmenting regions of interest within the retinograph image in order to grade the five severity levels of diabetic retinopathy. To evaluate the performance of the ADL-CNN model, the EyePACS benchmark is used and the results are compared with state-of-the-art methods. Sensitivity (SE), specificity (SP), F-measure, and classification accuracy (ACC) are used to measure the effectiveness of the ADL-CNN system. On 54,000 retinograph images, the ADL-CNN model achieved an average SE of 92.20%, SP of 95.10%, F-measure of 93% and ACC of 98%. Hence, the new ADL-CNN architecture outperforms existing methods in detecting DR-related lesions and recognizing the five severity levels of DR on a wide range of fundus images.
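The expected-gradient-length (EGL) selection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the CNN model, the patch tensors, and the 5-class DR label set are assumed for the example, and each unlabeled patch is scored by the loss-gradient norm each candidate label would induce, weighted by the model's predicted class probabilities; the highest-scoring patches are the "most informative" ones handed to annotators.

    # Hedged sketch of EGL-based active selection (PyTorch); names are illustrative.
    import torch
    import torch.nn.functional as F

    def egl_scores(model, unlabeled_patches, num_classes=5):
        """Score each patch by the expected norm of the loss gradient,
        averaged over candidate labels under the predicted probabilities."""
        scores = []
        for x in unlabeled_patches:        # x: (C, H, W) retinal patch tensor
            x = x.unsqueeze(0)             # add batch dimension
            probs = F.softmax(model(x), dim=1).squeeze(0).detach()
            expected_norm = 0.0
            for y in range(num_classes):
                model.zero_grad()
                loss = F.cross_entropy(model(x), torch.tensor([y]))
                loss.backward()
                grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                                           for p in model.parameters()
                                           if p.grad is not None))
                expected_norm += probs[y].item() * grad_norm.item()
            scores.append(expected_norm)
        return scores

    def select_most_informative(model, unlabeled_patches, k=8):
        """Return indices of the k patches expected to change the CNN
        the most if they were labeled (highest EGL score)."""
        scores = egl_scores(model, unlabeled_patches)
        return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

The selected patches would then be labeled (here, with one of the five DR severity grades) and added to the training set before the CNN is retrained, which is the label-efficiency loop the abstract attributes to ADL-CNN.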
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-020-10238-4