CAS‐GAN: A Novel Generative Adversarial Network‐Based Architecture for Coronary Artery Segmentation
Published in: International Journal of Imaging Systems and Technology, 2024-09, Vol. 34 (5), p. n/a
Main Authors:
Format: Article
Language: English
ABSTRACT

Accurate and automated segmentation of X-ray coronary angiography (XRCA) is crucial for both diagnosing and treating coronary artery diseases. Despite the outstanding results achieved by deep learning (DL)-based methods in this area, this task remains challenging due to several factors such as poor image quality, the presence of motion artifacts, and inherent variability in vessel structure sizes. To address this challenge, this paper introduces a novel GAN-based architecture for coronary artery segmentation using XRCA images. The architecture includes a novel U-Net variant with two types of self-attention blocks in the generator. An auxiliary path connects the attention block and the prediction block to enhance feature generalization, improving vessel structure delineation, especially for thin vessels in low-contrast regions. In parallel, the discriminator network employs a residual CNN with similar attention blocks for balanced performance and improved predictive capability. With a streamlined 6.74 M parameters, the resulting architecture surpasses existing methods in efficiency. We assess its efficacy on three coronary artery datasets: our private "CORONAR" dataset and the public "DCA1" and "CHUAC" datasets. Empirical results show our model's superiority across these datasets, using both original and preprocessed images. Notably, the proposed architecture achieves the highest F1-scores of 0.7972 on the CHUAC dataset, 0.8245 on the DCA1 dataset, and 0.8333 on the CORONAR dataset.
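The abstract describes the architecture only at a high level. As a rough, non-authoritative illustration of the components it names (a U-Net-style generator with self-attention and an auxiliary prediction path, paired with a residual discriminator carrying similar attention blocks), the minimal PyTorch sketch below shows one way such a setup could be wired. All block designs, channel widths, and loss weights here are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch of a GAN-based segmentation setup in the spirit of the abstract.
# Block designs, channel widths, and loss weights are illustrative assumptions,
# not the paper's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention(nn.Module):
    """Simplified non-local self-attention over spatial positions (assumed form)."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual gate

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)               # B x HW x C'
        k = self.k(x).flatten(2)                                # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)                     # B x HW x HW
        v = self.v(x).flatten(2).transpose(1, 2)                # B x HW x C
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out


def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )


class Generator(nn.Module):
    """U-Net-style generator with an attention block and an auxiliary prediction path."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 32), conv_block(32, 64)
        self.bottleneck = nn.Sequential(conv_block(64, 128), SelfAttention(128))
        self.up2, self.dec2 = nn.ConvTranspose2d(128, 64, 2, stride=2), conv_block(128, 64)
        self.up1, self.dec1 = nn.ConvTranspose2d(64, 32, 2, stride=2), conv_block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)
        self.aux = nn.Conv2d(128, 1, 1)  # auxiliary head fed by the attention features

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        b = self.bottleneck(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        main = torch.sigmoid(self.out(d1))
        aux = torch.sigmoid(F.interpolate(self.aux(b), size=x.shape[-2:],
                                          mode="bilinear", align_corners=False))
        return main, aux


class Discriminator(nn.Module):
    """Small residual CNN with self-attention that scores (image, mask) pairs."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(2, 32, 3, stride=2, padding=1)
        self.res = conv_block(32, 32)
        self.attn = SelfAttention(32)
        self.head = nn.Conv2d(32, 1, 3, padding=1)  # patch-wise real/fake logits

    def forward(self, img, mask):
        x = F.leaky_relu(self.stem(torch.cat([img, mask], dim=1)), 0.2)
        x = self.attn(x + self.res(x))
        return self.head(x)


# Usage on a dummy batch: segmentation loss (main + auxiliary) plus an adversarial term.
G, D = Generator(), Discriminator()
img = torch.randn(2, 1, 64, 64)
gt = torch.randint(0, 2, (2, 1, 64, 64)).float()
pred, aux = G(img)
d_out = D(img, pred)
adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
seg = F.binary_cross_entropy(pred, gt) + 0.4 * F.binary_cross_entropy(aux, gt)
loss_G = seg + 0.01 * adv  # weights are illustrative only
loss_G.backward()
```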
ISSN: 0899-9457, 1098-1098
DOI: 10.1002/ima.23159