Evaluation of generated synthetic OCT images in deep‐learning models for glaucoma detection
Published in: Acta Ophthalmologica (Oxford, England), 2022-12, Vol. 100 (S275), p. n/a
Main Authors:
Format: Article
Language: English
Summary:
Purpose: To evaluate the use of synthetically generated OCT images for the development of deep‐learning models in glaucoma detection.
Methods: Progressively Grown Generative Adversarial Network (PGGAN) models for glaucoma and healthy eyes were developed with data from 862 Asian glaucoma eyes and 990 Asian normal eyes to generate synthetic circumpapillary OCT images. Glaucoma detection deep‐learning models were trained using 1200, 10 000, 60 000 or 200 000 of the generated images, equally split between glaucoma and normal. Detection performance was evaluated on real images from an Asian dataset of 140 eyes from 112 subjects and a Caucasian dataset of 300 eyes from 160 subjects, with half of each being glaucoma. Results were compared with a glaucoma detection model trained with real images from 600 glaucoma and 600 healthy eyes, and with global retinal nerve fibre layer (RNFL) measurements using Area Under the Curve (AUC) analysis.
Results: Glaucoma detection performance improved with increasing synthetic dataset size, from an AUC of 0.945 [95% CI: 0.917–0.974] and 0.856 [95% CI: 0.819–0.889] on the Asian and Caucasian test data respectively when 1200 synthetic images were used, to AUCs of 0.969 [95% CI: 0.949–0.987] and 0.897 [95% CI: 0.867–0.927] when 200 000 synthetic images were used. The model trained on 200 000 synthetic images was significantly better (p …
ISSN: 1755-375X; 1755-3768
DOI: 10.1111/j.1755-3768.2022.0131
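
The Methods in the summary above describe training binary glaucoma-vs.-normal classifiers on class-balanced sets of PGGAN-generated circumpapillary OCT images and comparing them with a model trained on real scans. A minimal, hypothetical sketch of that classifier-training step is given below; the folder layout, image size, ResNet-18 architecture, and hyperparameters are illustrative assumptions, not details taken from the abstract.

```python
# Hypothetical sketch of the classifier-training step described in the Methods:
# a binary CNN trained on PGGAN-generated circumpapillary OCT images.
# Paths, image size, architecture and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

def train_on_synthetic(root="synthetic_oct/", n_epochs=10, batch_size=64):
    # Assumed folder layout: synthetic_oct/glaucoma/*.png and synthetic_oct/normal/*.png,
    # with equal numbers of glaucoma and normal images as in the study design.
    tfm = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),  # OCT B-scans are single-channel
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    ds = datasets.ImageFolder(root, transform=tfm)
    loader = DataLoader(ds, batch_size=batch_size, shuffle=True, num_workers=4)

    model = models.resnet18(weights=None)             # network choice is an assumption
    model.fc = nn.Linear(model.fc.in_features, 2)     # glaucoma vs. normal
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(n_epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```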
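The Results report AUCs with 95% confidence intervals on the real Asian and Caucasian test sets. The abstract does not state how those intervals were computed; the sketch below uses a simple bootstrap over eyes as one plausible approach, and it ignores the within-subject correlation between fellow eyes, which the study may have handled differently.

```python
# Hypothetical sketch of the evaluation step: AUC on a real test set with a
# bootstrap 95% CI. The bootstrap procedure is an assumption, not the
# method reported in the abstract.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    rng = np.random.default_rng(seed)
    auc = roc_auc_score(y_true, y_score)
    boots = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample eyes with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue                           # need both classes to compute an AUC
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return auc, (lo, hi)
```

The same function could score the global RNFL-thickness reference mentioned in the Methods by passing the (negated) thickness values as `y_score`, since thinner RNFL indicates glaucoma.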