WDTISeg: One-Stage Interactive Segmentation for Breast Ultrasound Image Using Weighted Distance Transform and Shape-Aware Compound Loss

Bibliographic Details
Published in: Applied Sciences 2021-07, Vol. 11 (14), p. 6279
Main Authors: Li, Xiaokang, Qiao, Mengyun, Guo, Yi, Zhou, Jin, Zhou, Shichong, Chang, Cai, Wang, Yuanyuan
Format: Article
Language:English
Description
Summary: Accurate tumor segmentation is important for aided diagnosis using breast ultrasound. Interactive segmentation methods can obtain highly accurate results by continuously optimizing the segmentation result via user interactions. However, traditional interactive segmentation methods usually require a large number of interactions to make the result meet the requirements, due to the performance limitations of the underlying model. With their greater ability to extract image information, convolutional neural network (CNN)-based interactive segmentation methods have been shown to effectively reduce the number of user interactions. In this paper, we propose a one-stage interactive segmentation framework (interactive segmentation using weighted distance transform, WDTISeg) for breast ultrasound images using a weighted distance transform and a shape-aware compound loss. First, we used a pre-trained CNN to obtain an initial automatic segmentation, based on which the user provided interaction points in mis-segmented areas. Then, we combined the Euclidean distance transform and the geodesic distance transform to convert the interaction points into weighted distance maps, transferring segmentation guidance information to the model. The same CNN accepted the input image, the initial segmentation, and the weighted distance maps as a concatenated input and produced a refined result, without an additional segmentation network. In addition, a shape-aware compound loss function using prior knowledge was designed to reduce the number of user interactions. In the testing phase on 200 cases, our method achieved a Dice score of 82.86 ± 16.22 (%) for the automatic segmentation task and a Dice score of 94.45 ± 3.26 (%) for the interactive segmentation task after 8 interactions. The results of comparative experiments showed that our method could obtain higher accuracy with fewer, simpler interactions than other interactive segmentation methods.
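The abstract's core encoding step — converting user interaction points into weighted distance maps by mixing a Euclidean and a geodesic distance transform — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper's exact weighting scheme and geodesic algorithm are not given in the abstract, so this sketch assumes an equal-weight linear mix (`alpha`), uses `scipy.ndimage.distance_transform_edt` for the Euclidean part, and approximates the geodesic distance with a simple Dijkstra search over a 4-neighborhood whose step cost grows with intensity difference.

```python
import heapq

import numpy as np
from scipy.ndimage import distance_transform_edt


def euclidean_map(clicks, shape):
    """Euclidean distance from every pixel to the nearest click point."""
    mask = np.ones(shape, dtype=bool)
    for r, c in clicks:
        mask[r, c] = False  # zero-distance seeds
    return distance_transform_edt(mask)


def geodesic_map(image, clicks):
    """Approximate geodesic distance: Dijkstra on the pixel grid where the
    step cost combines spatial distance and intensity difference."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for r, c in clicks:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                step = 1.0 + abs(float(image[nr, nc]) - float(image[r, c]))
                nd = d + step
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist


def weighted_distance_map(image, clicks, alpha=0.5):
    """Mix the two transforms into one guidance map (illustrative choice:
    normalize each to [0, 1], then blend with weight alpha)."""
    e = euclidean_map(clicks, image.shape)
    g = geodesic_map(image, clicks)
    e = e / (e.max() + 1e-8)
    g = g / (g.max() + 1e-8)
    return alpha * e + (1.0 - alpha) * g


if __name__ == "__main__":
    img = np.zeros((5, 5), dtype=np.float32)
    wmap = weighted_distance_map(img, clicks=[(2, 2)])
    print(wmap[2, 2])  # 0.0 at the click point
```

In the framework described above, one such map per interaction class (foreground/background corrections) would be concatenated with the input image and the initial segmentation before being fed back into the CNN.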
ISSN: 2076-3417
DOI: 10.3390/app11146279