A Unified Framework on Generalizability of Clinical Prediction Models
To be useful, clinical prediction models (CPMs) must be generalizable to patients in new settings. Evaluating generalizability of CPMs helps identify spurious relationships in data, provides insights into when they fail, and thus improves the explainability of the CPMs. There are discontinuities in concepts related to generalizability of CPMs in the clinical research and machine learning domains.
Published in: | Frontiers in artificial intelligence, 2022-04, Vol.5, p.872720-872720 |
---|---|
Main Authors: | Wan, Bohua; Caffo, Brian; Vedula, S Swaroop |
Format: | Article |
Language: | English |
Subjects: | Artificial Intelligence; clinical prediction models; diagnosis; explainability; external validity; generalizability; prognosis |
container_end_page | 872720 |
container_issue | |
container_start_page | 872720 |
container_title | Frontiers in artificial intelligence |
container_volume | 5 |
creator | Wan, Bohua; Caffo, Brian; Vedula, S Swaroop |
description | To be useful, clinical prediction models (CPMs) must be generalizable to patients in new settings. Evaluating generalizability of CPMs helps identify spurious relationships in data, provides insights into when they fail, and thus improves the explainability of the CPMs. There are discontinuities in concepts related to generalizability of CPMs in the clinical research and machine learning domains. Specifically, conventional statistical reasons to explain poor generalizability, such as inadequate model development for the purposes of generalizability, differences in coding of predictors and outcome between development and external datasets, measurement error, inability to measure some predictors, and missing data, all have differing and often complementary treatments in the two domains. Much of the current machine learning literature on generalizability of CPMs is in terms of dataset shift, of which several types have been described. However, little research exists to synthesize concepts in the two domains. Bridging this conceptual discontinuity in the context of CPMs can facilitate systematic development of CPMs and evaluation of their sensitivity to factors that affect generalizability. We survey generalizability and dataset shift in CPMs from both the clinical research and machine learning perspectives, and describe a unifying framework to analyze generalizability of CPMs and to explain their sensitivity to factors affecting it. Our framework leads to a set of signaling statements that can be used to characterize differences between datasets in terms of factors that affect generalizability of the CPMs. |
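The abstract refers to dataset shift "of which several types have been described" without listing them. As a brief sketch drawn from the general machine learning literature rather than from this record, the types are usually distinguished by which factor of the joint distribution of predictors X and outcome Y changes between the development (dev) and external (ext) datasets:

```latex
% Standard dataset-shift taxonomy (general background; not taken from this record).
% P_dev and P_ext denote distributions in the development and external datasets.
\begin{align*}
\text{Covariate shift:} \quad
  & P_{\mathrm{dev}}(X) \neq P_{\mathrm{ext}}(X)
    \ \text{while}\ P_{\mathrm{dev}}(Y \mid X) = P_{\mathrm{ext}}(Y \mid X) \\
\text{Label (prior probability) shift:} \quad
  & P_{\mathrm{dev}}(Y) \neq P_{\mathrm{ext}}(Y)
    \ \text{while}\ P_{\mathrm{dev}}(X \mid Y) = P_{\mathrm{ext}}(X \mid Y) \\
\text{Concept shift:} \quad
  & P_{\mathrm{dev}}(Y \mid X) \neq P_{\mathrm{ext}}(Y \mid X)
\end{align*}
```

Which of these factors differs between the development and external datasets determines how a CPM's performance can degrade on external data, which is the sense in which the abstract ties dataset shift to generalizability.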
doi_str_mv | 10.3389/frai.2022.872720 |
format | article |
fullrecord | Publisher: Frontiers Media S.A, Switzerland. Published online: 2022-04-29. PMID: 35573904. ISSN/EISSN: 2624-8212. DOI: 10.3389/frai.2022.872720. Copyright © 2022 Wan, Caffo and Vedula. Peer reviewed; open access, with free full text via PubMed Central (PMC9100692). |
fulltext | fulltext |
identifier | ISSN: 2624-8212 |
ispartof | Frontiers in artificial intelligence, 2022-04, Vol.5, p.872720-872720 |
issn | 2624-8212 (ISSN and EISSN) |
language | eng |
recordid | cdi_doaj_primary_oai_doaj_org_article_32893679e84e4cfea4d1410b1db46ef6 |
source | PubMed Central |
subjects | Artificial Intelligence; clinical prediction models; diagnosis; explainability; external validity; generalizability; prognosis |
title | A Unified Framework on Generalizability of Clinical Prediction Models |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-07T08%3A15%3A48IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_doaj_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Unified%20Framework%20on%20Generalizability%20of%20Clinical%20Prediction%20Models&rft.jtitle=Frontiers%20in%20artificial%20intelligence&rft.au=Wan,%20Bohua&rft.date=2022-04-29&rft.volume=5&rft.spage=872720&rft.epage=872720&rft.pages=872720-872720&rft.issn=2624-8212&rft.eissn=2624-8212&rft_id=info:doi/10.3389/frai.2022.872720&rft_dat=%3Cproquest_doaj_%3E2665106194%3C/proquest_doaj_%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c462t-a9f64c79bc26efaae992c3df0cf5540ef11f3ff2bd987b9d76c0bd14176de8e63%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2665106194&rft_id=info:pmid/35573904&rfr_iscdi=true |