
Computed tomography and guidelines‐based human–machine fusion model for predicting resectability of the pancreatic cancer

Bibliographic Details
Published in: Journal of gastroenterology and hepatology, 2024-02, Vol.39 (2), p.399-409
Main Authors: Yimamu, Adilijiang, Li, Jun, Zhang, Haojie, Liang, Lidu, Feng, Lei, Wang, Yi, Zhou, Chenjie, Li, Shulong, Gao, Yi
Format: Article
Language:English
Description: Background and Aim: This study aimed to develop a hybrid machine learning (ML) model for predicting the resectability of pancreatic cancer, based on computed tomography (CT) and the National Comprehensive Cancer Network (NCCN) guidelines. Methods: We retrospectively studied 349 patients. The 171 cases from Center 1 and 92 cases from Center 2 formed the primary training cohort, and the 66 cases from Center 3 and 20 cases from Center 4 formed the independent test set. The semi-automatic module of the ITK-SNAP software assisted CT image segmentation to obtain a three-dimensional (3D) imaging region of interest (ROI). From each 3D ROI, 788 handcrafted features were extracted using PyRadiomics; three feature-selection methods screened these down to an optimal three-feature subset, which served as the input to a support vector machine (SVM) to construct the conventional radiomics-based predictive model (cRad). The 3D ROI resolution was unified by 3D spline interpolation to construct a 3D tumor imaging tensor, which in turn served as the input to a 3D kernelled support tensor machine-based predictive model (KSTM) and a 3D ResNet-based deep learning predictive model (ResNet). A multi-classifier fusion ML model was then constructed by fusing cRad, KSTM, and ResNet with a multi-classifier fusion strategy. Separately, two experts with more than 10 years of clinical experience re-evaluated each patient's contrast-enhanced CT (CECT) following the NCCN guidelines, assigning a diagnosis of resectable, unresectable, or borderline resectable. These three diagnoses were converted to probability values of 0.25, 0.75, and 0.50, respectively, according to the traditional empirical method; the result was treated as an independent classifier and integrated with the multi-classifier fusion ML model to obtain the human–machine fusion ML model (HMfML).
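The human–machine fusion step above maps each expert's guideline-based diagnosis to an empirical probability and treats it as one more classifier. The record does not specify the paper's CFS-ER fusion rule, so the sketch below is a hypothetical minimal version that simply averages probability estimates; all names and numbers are illustrative, not the authors' code.

```python
# Illustrative sketch of the human-machine fusion step (not the paper's CFS-ER
# rule): expert diagnoses become probabilities, then fuse with ML outputs.

# Empirical mapping stated in the abstract: resectable -> 0.25,
# borderline resectable -> 0.50, unresectable -> 0.75.
EXPERT_PROB = {"resectable": 0.25, "borderline": 0.50, "unresectable": 0.75}

def expert_to_prob(diagnosis: str) -> float:
    """Map an NCCN-guideline diagnosis to its empirical probability value."""
    return EXPERT_PROB[diagnosis]

def fuse_probabilities(model_probs, expert_diagnosis):
    """Fuse ML classifier outputs (e.g. cRad, KSTM, ResNet probabilities)
    with the expert 'classifier' by averaging (a stand-in fusion rule)."""
    probs = list(model_probs) + [expert_to_prob(expert_diagnosis)]
    return sum(probs) / len(probs)

# Example: three hypothetical model outputs plus a 'borderline' expert call.
fused = fuse_probabilities([0.6, 0.7, 0.55], "borderline")
```

Averaging is only one of many fusion strategies; the paper's multi-classifier fusion presumably weights or combines the classifiers differently.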
Results: The multi-classifier fusion ML model's area under the receiver operating characteristic curve (AUC: 0.8610), predictive accuracy (ACC: 80.23%), sensitivity (SEN: 78.95%), and specificity (SPE: 80.60%) were better than those of the cRad, KSTM, and ResNet single-classifier models and of their two-classifier fusion models, indicating that the three models mined complementary CECT feature expressions from different perspectives and could be integrated through CFS-ER so that the fusion model performed better. HMfML improved further (AUC: 0.8845; ACC: 82.56%; SEN: 84.21%; SPE: 82.09%), suggesting that ML models may learn extra information from CECT that experts cannot distinguish, thus complementing expert experience and improving the performance of hybrid ML models. Conclusion: HMfML can predict pancreatic cancer resectability with high accuracy.
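The reported metrics (AUC, ACC, SEN, SPE) are standard binary-classification measures. As a reference, a minimal sketch of how they are computed from predictions is shown below; the data and function names are illustrative, not from the study.

```python
# Minimal sketch of the evaluation metrics reported above; illustrative only.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), and specificity from hard predictions."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    acc = (tp + tn) / len(y_true)
    sen = tp / (tp + fn) if tp + fn else 0.0
    spe = tn / (tn + fp) if tn + fp else 0.0
    return acc, sen, spe

def auc(y_true, scores):
    """Rank-based AUC: fraction of positive/negative pairs ranked correctly."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 for p in pos for n in neg if p > n)
    ties = sum(0.5 for p in pos for n in neg if p == n)
    return (wins + ties) / (len(pos) * len(neg))

# Toy example with six hypothetical patients.
acc, sen, spe = metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

In practice these would be computed on the independent test cohort's fused probability outputs, thresholded to hard labels for ACC/SEN/SPE and used directly as scores for AUC.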
DOI: 10.1111/jgh.16401
ISSN: 0815-9319
EISSN: 1440-1746
PMID: 37957952
Source: Wiley
Subjects: CECT
CFS‐ER
Computed tomography
Deep learning
HMfML
Image processing
Itk protein
Learning algorithms
Machine learning
Pancreatic cancer
PDAC
Prediction models
Radiomics
Tomography
Tumors