
Diagnosing uterine cervical cancer on a single T2-weighted image: Comparison between deep learning versus radiologists

Bibliographic Details
Published in: European Journal of Radiology, 2021-02, Vol. 135, Article 109471
Main Authors: Urushibara, Aiko, Saida, Tsukasa, Mori, Kensaku, Ishiguro, Toshitaka, Sakai, Masafumi, Masuoka, Souta, Satoh, Toyomi, Masumoto, Tomohiko
Format: Article
Language: English
Description
Summary:

Highlights:
•A deep learning model using convolutional neural networks (DCNN) can diagnose uterine cervical cancer on a T2-weighted image.
•The DCNN model, built from fewer than 300 cases, showed excellent diagnostic performance, equivalent to that of experienced radiologists.
•The DCNN model showed high diagnostic performance even though the training images were not cropped to the uterus alone.

To compare deep learning with radiologists in diagnosing uterine cervical cancer on a single T2-weighted image.

This study included 418 patients (age range, 21–91 years; mean, 50.2 years) who underwent magnetic resonance imaging (MRI) between June 2013 and May 2020: 177 patients with pathologically confirmed cervical cancer and 241 non-cancer patients. Sagittal T2-weighted images were used for analysis. A deep learning model using convolutional neural networks (DCNN), based on the Xception architecture, was trained for 50 epochs with 488 images from 117 cancer patients and 509 images from 181 non-cancer patients. It was tested with 120 images from 60 cancer and 60 non-cancer patients. Three blinded, experienced radiologists independently interpreted the same 120 images. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were compared between the DCNN model and the radiologists.

The DCNN model and the radiologists had a sensitivity of 0.883 and 0.783–0.867, a specificity of 0.933 and 0.917–0.950, and an accuracy of 0.908 and 0.867–0.892, respectively. The diagnostic performance of the DCNN model was equal to or better than that of the radiologists (AUC = 0.932; p for accuracy = 0.272–0.62).

Deep learning provided diagnostic performance equivalent to that of experienced radiologists when diagnosing cervical cancer on a single T2-weighted image.
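The abstract names the Xception architecture and a 50-epoch training run but gives no implementation details. The following is a minimal sketch of such a binary classifier, assuming a Keras/TensorFlow setup; the input size, ImageNet transfer weights, optimizer, learning rate, and channel handling are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of an Xception-based cancer / non-cancer classifier.
# Assumptions (not from the paper): Keras/TensorFlow, 299x299 inputs with the
# grayscale T2-weighted slice replicated to 3 channels, ImageNet initialization,
# Adam optimizer with learning rate 1e-4.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_xception_classifier(input_shape=(299, 299, 3)):
    # Xception backbone with global average pooling instead of the original top.
    base = tf.keras.applications.Xception(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg",
    )
    # Single sigmoid output: probability of cervical cancer on one image.
    outputs = layers.Dense(1, activation="sigmoid")(base.output)
    model = models.Model(inputs=base.input, outputs=outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC(name="auc"), "accuracy"],
    )
    return model

# Training for 50 epochs, matching the epoch count reported in the abstract;
# `train_ds` and `val_ds` are hypothetical tf.data datasets of labeled images.
# model = build_xception_classifier()
# model.fit(train_ds, validation_data=val_ds, epochs=50)
```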
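The comparison with radiologists is reported as sensitivity, specificity, accuracy, and ROC AUC. Below is a minimal sketch of how such metrics could be computed on the 120-image test set, assuming scikit-learn and hypothetical model outputs (`model`, `test_images`, `test_labels`); the authors' actual statistical analysis, including the accuracy p-values, is not reproduced here.

```python
# Sketch of the reported performance metrics on a held-out test set.
# The 0.5 decision threshold and the data handling are illustrative assumptions.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def summarize_performance(y_true, y_score, threshold=0.5):
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate on cancer cases
        "specificity": tn / (tn + fp),          # true negative rate on non-cancer cases
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "auc": roc_auc_score(y_true, y_score),  # threshold-independent ROC AUC
    }

# Example: scores from the DCNN on the 120 test images (60 cancer, 60 non-cancer).
# y_score = model.predict(test_images).ravel()
# print(summarize_performance(test_labels, y_score))
```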
ISSN: 0720-048X; 1872-7727
DOI: 10.1016/j.ejrad.2020.109471