
Distilling the knowledge from large-language model for health event prediction

Bibliographic Details
Published in: Scientific Reports, 2024-12, Vol. 14 (1), Article 30675 (11 pages)
Main Authors: Ding, Sirui; Ye, Jiancheng; Hu, Xia; Zou, Na
Format: Article
Language: English
Description:
Health event prediction is empowered by the rapid and wide adoption of electronic health records (EHRs). In the intensive care unit (ICU), precisely predicting health-related events in advance is essential for providing timely treatment and intervention to improve patient outcomes. EHRs are multi-modal data containing clinical text, time series, structured data, etc. Most health event prediction work focuses on a single modality, e.g., text or tabular EHR, and how to learn effectively from multi-modal EHRs remains a challenge. Inspired by the strong text-processing capability of large language models (LLMs), we propose CKLE, a framework for health event prediction that distils knowledge from an LLM and learns from multi-modal EHRs. Applying LLMs to health event prediction poses two challenges: first, most LLMs can handle only text, not other modalities such as structured data; second, the privacy requirements of health applications demand that the LLM be deployed locally, which may be limited by computational resources. CKLE addresses the scalability and portability of LLMs in the healthcare domain by distilling cross-modality knowledge from the LLM into the health event predictive model. To take full advantage of the LLM, the raw clinical text is refined and augmented with prompt learning, and embeddings of the clinical text are generated by the LLM. To distil the LLM's knowledge into the predictive model effectively, we design a cross-modality knowledge distillation (KD) method, with a specially designed training objective that accounts for multiple modalities and patient similarity. The KD loss consists of two parts: a cross-modality contrastive loss, which models the correlation between different modalities of the same patient, and a patient similarity learning loss, which models the correlations between similar patients. This cross-modality knowledge distillation transfers the rich information in clinical text and the knowledge of the LLM into a predictive model over structured EHR data. To demonstrate its effectiveness, we evaluate CKLE on two health event prediction tasks in cardiology, heart failure prediction and hypertension prediction, selecting 7,125 patients from the MIMIC-III dataset and splitting them into train/validation/test sets. CKLE achieves up to a 4.48% improvement in accuracy over state-of-the-art predictive models designed for health event prediction and surpasses the baseline models significantly in both normal and limited-label settings. We also conduct a case study on cardiology disease analysis for the heart failure and hypertension tasks: through feature importance calculation, we identify the salient features related to cardiac disease, which correspond to medical domain knowledge. The superior performance and interpretability of CKLE pave a promising way to leverage the power and knowledge of LLMs for health event prediction in real-world clinical settings.
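The two-part KD objective described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the InfoNCE-style contrastive term, the MSE similarity-matching term, the temperature, and the weighting `alpha` are all assumptions filled in for the sketch.

```python
import numpy as np

def info_nce(text_emb, ehr_emb, temperature=0.1):
    """Cross-modality contrastive loss (InfoNCE-style sketch): each patient's
    text embedding should be closest to that same patient's structured-EHR
    embedding among all patients in the batch."""
    # L2-normalise so dot products are cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    e = ehr_emb / np.linalg.norm(ehr_emb, axis=1, keepdims=True)
    logits = t @ e.T / temperature                    # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal (same patient, two modalities)
    return -np.mean(np.diag(log_prob))

def patient_similarity_loss(student_emb, teacher_sim):
    """Patient-similarity term (sketch): pairwise similarities among the
    student's structured-EHR embeddings should match a teacher similarity
    matrix, e.g. cosine similarities of the LLM text embeddings."""
    s = student_emb / np.linalg.norm(student_emb, axis=1, keepdims=True)
    return np.mean((s @ s.T - teacher_sim) ** 2)

def kd_loss(text_emb, ehr_emb, alpha=0.5):
    """Combined KD objective: contrastive cross-modality alignment plus
    patient-similarity matching; alpha is a hypothetical trade-off weight."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    teacher_sim = t @ t.T
    return info_nce(text_emb, ehr_emb) + alpha * patient_similarity_loss(ehr_emb, teacher_sim)
```

In use, `text_emb` would hold LLM embeddings of the (prompt-refined) clinical notes and `ehr_emb` the student model's embeddings of the structured EHR; the loss is low when the two modalities of each patient align and the student preserves the teacher's patient-to-patient similarity structure.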
DOI: 10.1038/s41598-024-75331-2
ISSN: 2045-2322
Online Access: Publicly Available Content Database; Full-Text Journals in Chemistry (Open access); PubMed Central; Springer Nature - nature.com Journals - Fully Open Access
Subjects:
631/114
631/114/1305
631/114/2164
Cardiology
Cardiovascular disease
Congestive heart failure
Distillation
Electronic Health Records
Electronic medical records
Embedding
Health event prediction
Heart diseases
Heart failure
Humanities and Social Sciences
Humans
Hypertension
Intensive Care Units
Knowledge
Knowledge distillation
Large language models
Large-language model
Learning
Machine Learning
Multi-modal learning
multidisciplinary
Natural Language Processing
Patients
Prediction models
Science
Science (multidisciplinary)