MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images

Bibliographic Details
Published in: Computerized Medical Imaging and Graphics, 2024-03, Vol. 112, Article 102326
Main Authors: Jiang, Hongxu, Imran, Muhammad, Muralidharan, Preethika, Patel, Anjali, Pensa, Jake, Liang, Muxuan, Benidir, Tarik, Grajo, Joseph R., Joseph, Jason P., Terry, Russell, DiBianco, John Michael, Su, Li-Ming, Zhou, Yuyin, Brisbane, Wayne G., Shao, Wei
Format: Article
Language: English
Description
Summary: Micro-ultrasound (micro-US) is a novel 29-MHz ultrasound technique that provides 3-4 times higher resolution than traditional ultrasound, potentially enabling low-cost, accurate diagnosis of prostate cancer. Accurate prostate segmentation is crucial for prostate volume measurement, cancer diagnosis, prostate biopsy, and treatment planning. However, prostate segmentation on micro-US is challenging due to artifacts and indistinct borders between the prostate, bladder, and urethra in the midline. This paper presents MicroSegNet, a multi-scale annotation-guided transformer UNet model designed specifically to tackle these challenges. During training, MicroSegNet focuses on hard-to-segment regions (hard regions), characterized by discrepancies between expert and non-expert annotations. We achieve this by proposing an annotation-guided binary cross-entropy (AG-BCE) loss that assigns a larger weight to prediction errors in hard regions and a smaller weight to prediction errors in easy regions. The AG-BCE loss was integrated into training through multi-scale deep supervision, enabling MicroSegNet to capture both global contextual dependencies and local information at various scales. We trained our model on micro-US images from 55 patients and evaluated it on 20 patients. MicroSegNet achieved a Dice coefficient of 0.939 and a Hausdorff distance of 2.02 mm, outperforming several state-of-the-art segmentation methods as well as three human annotators with different experience levels. Our code is publicly available at https://github.com/mirthAI/MicroSegNet and our dataset at https://zenodo.org/records/10475293.

Highlights:
• First deep learning model for automated prostate segmentation on micro-ultrasound.
• A novel annotation-guided segmentation loss that prioritizes hard-to-segment regions.
• A dataset with micro-ultrasound images and human prostate annotations is publicly available.
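The annotation-guided weighting idea in the abstract can be sketched in a few lines: derive a "hard region" mask from pixels where expert and non-expert annotations disagree, then up-weight the binary cross-entropy at those pixels. This is a minimal NumPy illustration under stated assumptions; the function name `ag_bce_loss`, the specific weight values, and the normalization are illustrative choices, not the paper's exact implementation (the authors' version is in the linked GitHub repository).

```python
import numpy as np

def ag_bce_loss(pred, expert, non_expert, w_hard=4.0, w_easy=1.0, eps=1e-7):
    """Sketch of an annotation-guided BCE loss.

    pred       : predicted foreground probabilities in (0, 1)
    expert     : expert binary annotation (used as ground truth)
    non_expert : non-expert binary annotation (used only to find hard regions)
    w_hard/w_easy are illustrative weights, not the paper's values.
    """
    pred = np.clip(pred, eps, 1.0 - eps)          # numerical stability
    hard = (expert != non_expert).astype(float)   # 1 where annotators disagree
    weights = w_hard * hard + w_easy * (1.0 - hard)
    bce = -(expert * np.log(pred) + (1.0 - expert) * np.log(1.0 - pred))
    return float(np.sum(weights * bce) / np.sum(weights))
```

With this weighting, the same prediction error costs more inside a disagreement (hard) region than in an easy region, which pushes training effort toward ambiguous boundaries such as the prostate-bladder interface.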
ISSN: 0895-6111, 1879-0771
DOI: 10.1016/j.compmedimag.2024.102326