
Just Noticeable Difference-Guided Multipath Deep Attention Network for Microaneurysm Segmentation in Fundus Images

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2024, Vol. 73, pp. 1-12
Main Authors: Bhargav, P. Rajith, Puhan, Niladri B.
Format: Article
Language:English
Description
Summary: In this article, we propose the just noticeable difference (JND)-based attention module (JbAM), a parameter-free attention module for accurately segmenting microaneurysms (MAs), the earliest clinical indicators of diabetic retinopathy (DR), in fundus images. To the best of the authors' knowledge, JbAM is the first attention mechanism that efficiently captures discriminative regions of the input in the form of JND values, which are then transformed into proportionate attention scores using an adaptive sigmoid function. We also propose a multipath attentive feature fusion (MAFF) block, which uses a weighted-average fusion approach to effectively combine the attended low-level feature (LLF) and high-level feature (HLF), thus reducing the dimensionality of the concatenated feature space. To aid the end-to-end training of the JbAM-guided, MAFF-integrated deep attention network, a novel attention-aware binary cross-entropy (BCE) loss (A2BL) is formulated to extract highly localized information from tiny MA regions. Analysis of the attention maps demonstrates the proposed network's superior focus on discriminative MA regions while suppressing redundant background information, which is crucial for combating the challenging issue of false positives. The proposed JND-based U-Net (JbU-Net) obtained a sensitivity of 77.97% and 73.10%, a dice coefficient of 49.82% and 47.72%, a mean intersection over union (IoU) of 67.70% and 65.55%, and an F2-score of 54.54% and 52.08% on the benchmark fundus image datasets E-Ophtha and DDR (dataset for DR), respectively. Extensive experimental evaluations show that the proposed MA segmentation methodology consistently outperforms existing works on these performance metrics at a very low false positive rate (FPR) of 0.001.
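The two core ideas in the summary (mapping JND values to attention scores with a sigmoid, and weighted-average fusion of attended low- and high-level features) can be sketched as follows. This is a minimal illustrative sketch only: the paper's actual JND model, the form of its adaptive sigmoid, and whether the fusion weight is learned are not specified in the abstract, so the midpoint/steepness parameters and the fixed `alpha` below are assumptions.

```python
import numpy as np

def adaptive_sigmoid(x, k=1.0, x0=None):
    """Map values to (0, 1). The steepness k and midpoint x0 are
    illustrative; here the midpoint adapts to the mean of the input,
    which is only one plausible choice of 'adaptive' behavior."""
    if x0 is None:
        x0 = x.mean()
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def jnd_attention(jnd_map):
    """JbAM-style idea: turn a per-pixel JND map into proportionate
    attention scores, with no learnable parameters."""
    return adaptive_sigmoid(jnd_map)

def weighted_average_fusion(llf, hlf, alpha=0.5):
    """MAFF-style weighted-average fusion of low-level and high-level
    features; a fixed alpha is a hypothetical stand-in (the actual
    weighting scheme may differ)."""
    return alpha * llf + (1.0 - alpha) * hlf

# Toy example on random stand-in data.
rng = np.random.default_rng(0)
jnd = rng.random((4, 4))            # stand-in per-pixel JND values
att = jnd_attention(jnd)            # attention scores in (0, 1)
llf = rng.random((4, 4))            # stand-in low-level feature
hlf = rng.random((4, 4))            # stand-in high-level feature
fused = weighted_average_fusion(att * llf, att * hlf)
```

Note how the fusion keeps the output at the same spatial size as either input, rather than concatenating the two features, which is the dimensionality reduction the abstract alludes to.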
ISSN:0018-9456
1557-9662
DOI:10.1109/TIM.2024.3381663