
Deep learning–based multimodal segmentation of oropharyngeal squamous cell carcinoma on CT and MRI using self-configuring nnU-Net


Bibliographic Details
Published in:European radiology 2024-08, Vol.34 (8), p.5389-5400
Main Authors: Choi, Yangsean, Bang, Jooin, Kim, Sang-Yeon, Seo, Minkook, Jang, Jinhee
Format: Article
Language:English
Description: Purpose: To evaluate deep learning–based segmentation models for oropharyngeal squamous cell carcinoma (OPSCC) on CT and MRI using nnU-Net.

Methods: This single-center retrospective study included 91 patients with OPSCC, grouped into development (n = 56), test 1 (n = 13), and test 2 (n = 22) cohorts. In the development cohort, OPSCC was manually segmented on CT, MR, and co-registered CT-MR images, which served as the ground truth. Models were then trained on the multimodal and multichannel input images using the self-configuring nnU-Net. For evaluation, the Dice similarity coefficient (DSC) and mean Hausdorff distance (HD) were calculated in the test cohorts. Pearson's correlation and Bland–Altman analyses were performed between ground-truth and predicted volumes, and intraclass correlation coefficients (ICCs) of radiomic features were calculated to assess reproducibility.

Results: All models achieved robust segmentation performance, with DSCs of 0.64 ± 0.33 (CT), 0.67 ± 0.27 (MR), and 0.65 ± 0.29 (CT-MR) in test cohort 1, and 0.57 ± 0.31 (CT), 0.77 ± 0.08 (MR), and 0.73 ± 0.18 (CT-MR) in test cohort 2. DSC did not differ significantly among the models. HDs of the CT-MR (1.57 ± 1.06 mm) and MR (1.36 ± 0.61 mm) models were significantly lower than that of the CT model (3.48 ± 5.0 mm) (p = 0.037 and p = 0.014, respectively). Correlation coefficients between ground-truth and predicted volumes were 0.88 (CT), 0.93 (MR), and 0.9 (CT-MR). The MR models showed excellent mean ICCs of radiomic features (0.91–0.93).

Conclusion: The self-configuring nnU-Net provided reliable and accurate segmentation of OPSCC on CT and MRI. The multimodal CT-MR model showed promise for simultaneous segmentation on CT and MRI.

Clinical relevance statement: Deep learning–based automatic detection and segmentation of oropharyngeal squamous cell carcinoma on pre-treatment CT and MRI would facilitate radiologic response assessment and radiotherapy planning.

Key Points:
• The nnU-Net framework produced reliable and accurate segmentation of OPSCC on CT and MRI.
• The MR and CT-MR models showed higher DSC and lower Hausdorff distance than the CT model.
• Correlation coefficients between the ground-truth and predicted segmentation volumes were high in all three models.
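The Dice similarity coefficient reported in the abstract measures voxel overlap between a ground-truth and a predicted binary mask. As a minimal sketch of that metric (toy NumPy arrays standing in for segmentations; this is an illustration, not the authors' nnU-Net pipeline, and the Hausdorff-distance and ICC analyses are not shown):

```python
import numpy as np

def dice_coefficient(gt: np.ndarray, pred: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    gt = gt.astype(bool)
    pred = pred.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    denom = gt.sum() + pred.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D masks standing in for tumor segmentations.
gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1          # 16-voxel "ground truth" lesion
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1        # prediction shifted down by one row

# Overlap is 12 voxels, masks are 16 voxels each: 2*12 / (16+16) = 0.75
print(round(dice_coefficient(gt, pred), 3))  # → 0.75
```

A DSC of 0.77 ± 0.08, as the MR model achieved in test cohort 2, therefore indicates substantially better overlap than this toy example's one-row misalignment at the same lesion size.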
DOI: 10.1007/s00330-024-10585-y
Publisher: Springer Berlin Heidelberg
PMID: 38243135
ORCID: 0000-0003-1674-7101
ISSN: 0938-7994
EISSN: 1432-1084
Source: Springer Nature
Subjects:
Adult
Aged
Cancer
Carcinoma, Squamous Cell - diagnostic imaging
Cell culture
Computed tomography
Correlation coefficient
Correlation coefficients
Deep Learning
Diagnostic Radiology
Female
Head & neck cancer
Head and Neck
Humans
Image Interpretation, Computer-Assisted - methods
Image processing
Image segmentation
Imaging
Internal Medicine
Interventional Radiology
Magnetic resonance imaging
Magnetic Resonance Imaging - methods
Male
Medical imaging
Medicine
Medicine & Public Health
Metric space
Middle Aged
Multimodal Imaging - methods
Neuroradiology
Oropharyngeal Neoplasms - diagnostic imaging
Oropharyngolaryngeal carcinoma
Patients
Radiation therapy
Radiology
Radiomics
Reproducibility of Results
Retrospective Studies
Segmentation
Simultaneous discrimination learning
Squamous cell carcinoma
Throat cancer
Tomography, X-Ray Computed - methods
Ultrasound