Glaucoma assessment from color fundus images using convolutional neural network
Published in: International Journal of Imaging Systems and Technology, 2021-06, Vol. 31 (2), p. 955-971
Main Authors:
Format: Article
Language: English
Summary: Early detection and proper screening are essential to prevent vision loss due to glaucoma. In recent years, convolutional neural networks (CNNs) have been successfully applied to color fundus images for the automatic detection of glaucoma. Compared to existing automatic screening methods, CNNs can extract discriminative features directly from the fundus images. In this paper, a deep learning architecture based on a CNN is designed for the classification of glaucomatous and normal fundus images. An 18-layer CNN is designed and trained to extract discriminative features from the fundus image; it comprises four convolutional layers, two max-pooling layers, and one fully connected layer. A two-stage tuning approach is proposed for selecting a suitable batch size and initial learning rate. The proposed network is tested on the DRISHTI-GS1, ORIGA, RIM-ONE2 (release 2), ACRIMA, and large-scale attention-based glaucoma (LAG) databases. A rotation-based data augmentation technique is employed to enlarge the datasets. A randomly selected 70% of the images is used for training the model and the remaining 30% for testing. Overall accuracies of 86.62%, 85.97%, 78.32%, 94.43%, and 96.64% are obtained on the DRISHTI-GS1, RIM-ONE2, ORIGA, LAG, and ACRIMA databases, respectively. The proposed method achieves an accuracy, sensitivity, specificity, and precision of 96.64%, 96.07%, 97.39%, and 97.74%, respectively, on the ACRIMA database. Compared to other existing architectures, the proposed method is robust to Gaussian noise and salt-and-pepper noise.
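The two-stage tuning described in the summary (first select a batch size, then an initial learning rate) can be sketched as follows. This is a minimal illustration only: the candidate grids, the default learning rate, and the `validation_accuracy` stand-in are assumptions, not values or code from the paper, which would instead train the 18-layer CNN and score it on held-out fundus images.

```python
# Hypothetical search grids; the paper does not publish its exact ranges.
BATCH_SIZES = [8, 16, 32, 64]
LEARNING_RATES = [1e-1, 1e-2, 1e-3, 1e-4]

def validation_accuracy(batch_size, learning_rate):
    """Placeholder for training the CNN with these hyperparameters and
    measuring accuracy on a validation split. A deterministic toy
    surface (peaking at batch_size=32, lr=1e-3 by assumption) stands in
    so the tuning logic itself can be demonstrated."""
    return 1.0 / (1 + abs(batch_size - 32) / 32 + abs(learning_rate - 1e-3) * 100)

def two_stage_tuning(default_lr=1e-3):
    # Stage 1: hold the learning rate fixed and choose the batch size
    # that maximizes validation accuracy.
    best_bs = max(BATCH_SIZES, key=lambda bs: validation_accuracy(bs, default_lr))
    # Stage 2: hold that batch size fixed and choose the best initial
    # learning rate.
    best_lr = max(LEARNING_RATES, key=lambda lr: validation_accuracy(best_bs, lr))
    return best_bs, best_lr
```

Searching the two hyperparameters sequentially instead of jointly reduces the number of training runs from `len(BATCH_SIZES) * len(LEARNING_RATES)` to their sum, at the cost of possibly missing interactions between the two.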
ISSN: 0899-9457; 1098-1098
DOI: 10.1002/ima.22494