Self-supervised multimodal reconstruction of retinal images over paired datasets
Published in: Expert Systems with Applications, 2020-12, Vol. 161, p. 113674, Article 113674
Main Authors: , , ,
Format: Article
Language: English
Summary:
• Self-supervised multimodal reconstruction tasks are enabled through image registration.
• Non-invasive deep learning pseudo-angiographies are generated from retinographies.
• The generated pseudo-angiographies resemble the original angiographies.
• Generating angiography images requires the recognition of high-level retinal patterns.
• Multimodal reconstruction provides relevant domain information without human labels.
Data scarcity is an important constraint on the training of deep neural networks in medical imaging. Medical image labeling, especially when pixel-level annotations are required, is an expensive task that needs expert intervention and usually yields only a small number of annotated samples. In contrast, extensive amounts of unlabeled data are produced in daily clinical practice, including paired multimodal images from patients who underwent multiple imaging tests. This work proposes a novel self-supervised multimodal reconstruction task that takes advantage of this unlabeled multimodal data for learning about the domain without human supervision. Paired multimodal data is a rich source of clinical information that can be naturally exploited by trying to estimate one image modality from others. This multimodal reconstruction requires the recognition of domain-specific patterns that can be used to complement the training of image analysis tasks in the same domain for which annotated data is scarce.
In this work, a set of experiments is performed using a multimodal setting of retinography and fluorescein angiography pairs that offer complementary information about the eye fundus. The evaluations performed on different public datasets, which include pathological and healthy data samples, demonstrate that a network trained for self-supervised multimodal reconstruction of angiography from retinography achieves unsupervised recognition of important retinal structures. These results indicate that the proposed self-supervised task provides relevant cues for image analysis tasks in the same domain.
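The core objective described in the abstract — predicting one registered image modality from another, with the paired image itself acting as the supervision signal — can be sketched in a few lines. The linear per-pixel model and synthetic image pairs below are illustrative stand-ins only; the paper's actual deep network and retinal datasets are not reproduced here.

```python
import numpy as np

# Sketch of self-supervised multimodal reconstruction: learn to predict
# a target modality ("angiography") from a source modality
# ("retinography") using only paired images, with no human labels.
rng = np.random.default_rng(0)

# Synthetic "registered" pairs: the target is an unknown transform of
# the source plus noise (a stand-in for real eye-fundus image pairs).
n_pairs, h, w = 32, 8, 8
retino = rng.random((n_pairs, h, w))
angio = 0.7 * retino + 0.2 + 0.01 * rng.standard_normal((n_pairs, h, w))

# Toy "network": a single scale and shift applied per pixel.
scale, shift = 0.0, 0.0
lr = 0.5
for step in range(200):
    pred = scale * retino + shift        # reconstructed modality
    err = pred - angio
    loss = np.mean(err ** 2)             # MSE reconstruction loss
    # Gradient descent on the reconstruction objective.
    scale -= lr * np.mean(2 * err * retino)
    shift -= lr * np.mean(2 * err)

# Minimizing the paired-reconstruction loss recovers the cross-modal
# transform (scale ~ 0.7, shift ~ 0.2) without any annotations.
print(round(scale, 2), round(shift, 2), round(loss, 5))
```

In the paper's setting, a deep network in place of this linear map must learn high-level retinal structure to solve the same objective, which is what makes the task useful as self-supervised pretraining.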
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2020.113674