
Synergic Adversarial Label Learning for Grading Retinal Diseases via Knowledge Distillation and Multi-Task Learning

Bibliographic Details
Published in: IEEE Journal of Biomedical and Health Informatics, 2021-10, Vol. 25 (10), p. 3709-3720
Main Authors: Ju, Lie, Wang, Xin, Zhao, Xin, Lu, Huimin, Mahapatra, Dwarikanath, Bonnington, Paul, Ge, Zongyuan
Format: Article
Language: English
Description
Summary: The need for comprehensive and automated screening methods for retinal image classification has long been recognized. Images annotated by well-qualified doctors are very expensive, and only a limited amount of data is available for retinal diseases such as diabetic retinopathy (DR) and age-related macular degeneration (AMD). Studies show that retinal diseases such as DR and AMD share common features, such as haemorrhages and exudation, yet most classification algorithms train each disease model independently when only a single label is available per image. Inspired by multi-task learning, where additional supervisory signals from various sources are beneficial for training a robust model, we propose a method called synergic adversarial label learning (SALL), which leverages relevant retinal disease labels in both the semantic and feature spaces as additional signals and trains the model in a collaborative manner using knowledge distillation. Our experiments on DR and AMD fundus image classification tasks demonstrate that the proposed method significantly improves disease-grading accuracy, by 5.91% and 3.69% respectively. In addition, we conduct further experiments to show the effectiveness of SALL in terms of reliability and interpretability in the context of medical imaging applications.
ISSN: 2168-2194, 2168-2208
DOI: 10.1109/JBHI.2021.3052916
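
The abstract above describes SALL as using labels from a related retinal disease as extra supervision, trained collaboratively via knowledge distillation. As a rough illustration only, the sketch below shows a generic soft-label distillation objective in PyTorch, in which a frozen teacher trained on a related disease (e.g. AMD) regularises a DR grading student. The backbone choice, the hyper-parameters (NUM_DR_GRADES, TEMPERATURE, ALPHA), and the assumption that the teacher shares the student's output space are all illustrative assumptions, not details taken from the paper; the paper's adversarial and feature-space components are not reproduced here.

# Minimal, hypothetical sketch of soft-label knowledge distillation with a
# related-disease teacher; not the paper's exact SALL formulation.
import torch
import torch.nn.functional as F
from torchvision import models

NUM_DR_GRADES = 5   # assumed number of DR severity grades
TEMPERATURE = 4.0   # assumed distillation temperature
ALPHA = 0.5         # assumed weight between hard-label and soft-label terms

# Student: DR grading network. Teacher: frozen model for a related disease
# (assumed, for simplicity, to output the same label space as the student).
student = models.resnet50(num_classes=NUM_DR_GRADES)
teacher = models.resnet50(num_classes=NUM_DR_GRADES)
teacher.eval()

def distillation_loss(images, hard_labels):
    """Cross-entropy on the ground-truth grades plus a KL-divergence term
    pulling the student toward the teacher's temperature-softened outputs."""
    student_logits = student(images)
    with torch.no_grad():
        teacher_logits = teacher(images)

    ce = F.cross_entropy(student_logits, hard_labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / TEMPERATURE, dim=1),
        F.softmax(teacher_logits / TEMPERATURE, dim=1),
        reduction="batchmean",
    ) * TEMPERATURE ** 2
    return ALPHA * ce + (1.0 - ALPHA) * kd

# Example usage with random tensors standing in for fundus images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_DR_GRADES, (4,))
loss = distillation_loss(images, labels)
loss.backward()

In a full training loop this loss would be stepped with an optimizer over the student's parameters only, with the teacher kept frozen; the paper additionally couples the two label spaces adversarially, which this sketch does not attempt.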