
Diagnosing uterine cervical cancer on a single T2-weighted image: Comparison between deep learning versus radiologists

•A deep learning model using convolutional neural networks (DCNN) can diagnose uterine cervical cancer on a T2-weighted image.
•The DCNN model, built from fewer than 300 cases, showed diagnostic performance equivalent to experienced radiologists.
•Although the images used for training were not cropped to the uterine region, the DCNN model showed high diagnostic performance.

Bibliographic Details
Published in: European journal of radiology, 2021-02, Vol. 135, p. 109471, Article 109471
Main Authors: Urushibara, Aiko, Saida, Tsukasa, Mori, Kensaku, Ishiguro, Toshitaka, Sakai, Masafumi, Masuoka, Souta, Satoh, Toyomi, Masumoto, Tomohiko
Format: Article
Language: English
Description:
•A deep learning model using convolutional neural networks (DCNN) can diagnose uterine cervical cancer on a T2-weighted image.
•The DCNN model, built from fewer than 300 cases, showed diagnostic performance equivalent to experienced radiologists.
•Although the images used for training were not cropped to the uterine region, the DCNN model showed high diagnostic performance.

To compare deep learning with radiologists in diagnosing uterine cervical cancer on a single T2-weighted image, this study included 418 patients (age range, 21–91 years; mean, 50.2 years) who underwent magnetic resonance imaging (MRI) between June 2013 and May 2020: 177 patients with pathologically confirmed cervical cancer and 241 non-cancer patients. Sagittal T2-weighted images were used for analysis.

A deep learning model using convolutional neural networks (DCNN), based on the Xception architecture, was trained for 50 epochs on 488 images from 117 cancer patients and 509 images from 181 non-cancer patients. It was tested on 120 images, one each from 60 cancer and 60 non-cancer patients. Three blinded, experienced radiologists independently interpreted the same 120 images. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were compared between the DCNN model and the radiologists.

The DCNN model and the radiologists had sensitivities of 0.883 and 0.783–0.867, specificities of 0.933 and 0.917–0.950, and accuracies of 0.908 and 0.867–0.892, respectively. The DCNN model's diagnostic performance was equal to or better than the radiologists' (AUC = 0.932; p for accuracy = 0.272–0.62). Deep learning provided diagnostic performance equivalent to experienced radiologists when diagnosing cervical cancer on a single T2-weighted image.
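The abstract compares the model and the radiologists on sensitivity, specificity, accuracy, and AUC. As an illustrative sketch only (this is not the study's code, and the score arrays below are invented for demonstration), these four metrics can be computed from binary labels and predicted scores with NumPy, using the Mann–Whitney U statistic for the AUC:

```python
import numpy as np

def binary_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, and accuracy at a threshold, plus ROC AUC.

    y_true: 0/1 ground-truth labels; y_score: predicted scores in [0, 1].
    AUC is the probability that a random positive scores higher than a
    random negative, with ties counted as 1/2 (Mann-Whitney U / n_pos*n_neg).
    """
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    y_pred = (y_score >= threshold).astype(int)

    # Confusion-matrix cells
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / y_true.size

    # Rank-based AUC: compare every positive score to every negative score
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    auc = (greater + 0.5 * ties) / (pos.size * neg.size)
    return sensitivity, specificity, accuracy, auc

# Toy example (invented data, not from the study):
sens, spec, acc, auc = binary_metrics(
    [1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.4, 0.6, 0.2, 0.1]
)
```

In the study itself these statistics were computed over the 120-image test set for the DCNN model and for each radiologist; the sketch above just makes the definitions concrete.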
DOI: 10.1016/j.ejrad.2020.109471
ISSN: 0720-048X
EISSN: 1872-7727
Source: ScienceDirect Freedom Collection
Subjects:
Adult
Aged
Aged, 80 and over
Artificial intelligence
Cervical carcinoma
CNN
Convolutional neural network
Deep Learning
Female
Humans
Magnetic resonance imaging
Middle Aged
Neural Networks, Computer
Radiologists
Retrospective Studies
T2WI
Uterine Cervical Neoplasms - diagnostic imaging
Young Adult