Enhanced Structure Preservation and Multi-View Approach in Unsupervised Domain Adaptation for Optic Disc and Cup Segmentation
Format: Conference Proceeding
Language: English
Summary: In addressing the risk of blindness caused by glaucoma, precise and rapid segmentation of the optic disc and cup is vital for early detection and monitoring. However, manual segmentation, the standard approach, is inefficient and subjective, varying with the experience and expertise of the expert. Overcoming this limitation requires automated segmentation methods. Despite advances in deep learning in this field, performance declines when models are applied across different domains, impeding practical use. Previous studies have struggled to preserve the structural information of source images and have overlooked variations in the visual characteristics of fundus images even within the same center. To this end, we propose an effective image-level unsupervised domain adaptation (UDA) framework to enhance optic disc and cup segmentation. The framework generates pseudo-target-domain images via image-to-image translation from source-domain images. It addresses the structure-preservation challenge by incorporating a spatially correlative loss into the QS-Attn translation model. Furthermore, we use multi-view image translation with CycleGAN to increase the visual diversity of the translated images, benefiting the segmentation model. Together, these models produce a robust training set that improves segmentation performance. Our experiments on the RIGA+ dataset demonstrate that our framework outperforms current state-of-the-art methods in segmentation performance.
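The structure-preservation idea named in the summary, a spatially correlative loss, can be illustrated with a short sketch: rather than matching the features of the source image and its translation directly, it matches their self-similarity maps, so appearance is free to change while spatial structure is penalized for drifting. Below is a minimal PyTorch sketch under that reading; the function name, the local `patch_size` neighborhood, the L1 comparison, and the use of a frozen encoder are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def spatially_correlative_loss(feat_src, feat_trans, patch_size=7):
    """Illustrative sketch (not the paper's implementation): compare
    self-similarity maps of source and translated features instead of
    the features themselves, so structure is preserved while appearance
    may change.

    feat_src, feat_trans: (B, C, H, W) feature maps, e.g. from a frozen
    pretrained encoder applied to the source image and its translation.
    """
    def self_similarity(feat):
        b, c, h, w = feat.shape
        feat = F.normalize(feat, dim=1)
        pad = patch_size // 2
        # Gather each position's patch_size x patch_size neighborhood.
        neigh = F.unfold(feat, kernel_size=patch_size, padding=pad)  # (B, C*k*k, H*W)
        neigh = neigh.view(b, c, patch_size * patch_size, h * w)
        center = feat.view(b, c, 1, h * w)
        # Cosine similarity of each position to its neighbors: a map of
        # local structure that is largely appearance-invariant.
        return (center * neigh).sum(dim=1)  # (B, k*k, H*W)

    sim_src = self_similarity(feat_src)
    sim_trans = self_similarity(feat_trans)
    return F.l1_loss(sim_src, sim_trans)
```

In a framework of the kind described, a term like this would be computed on features of a source fundus image and its pseudo-target translation and added to the translation model's objective, discouraging the generator from distorting the optic disc and cup boundaries while restyling the image.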
ISSN: 1945-8452
DOI: 10.1109/ISBI56570.2024.10635127