
Supervised Knowledge Makes Large Language Models Better In-context Learners

Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering. The recent progress in large-scale generative models has further expanded their use in real-world language applications. However, the critical challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored. While previous in-context learning research has focused on enhancing models to adhere to users' specific instructions and quality expectations, and to avoid undesired outputs, little to no work has explored the use of task-Specific fine-tuned Language Models (SLMs) to improve LLMs' in-context learning during the inference stage. Our primary contribution is the establishment of a simple yet effective framework that enhances the reliability of LLMs as it: 1) generalizes out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks. Using our proposed plug-in method, enhanced versions of Llama 2 and ChatGPT surpass their original versions regarding generalizability and factuality. We offer a comprehensive suite of resources, including 16 curated datasets, prompts, model checkpoints, and LLM outputs across 9 distinct tasks. The code and data are released at: https://github.com/YangLinyi/Supervised-Knowledge-Makes-Large-Language-Models-Better-In-context-Learners. Our empirical analysis sheds light on the advantages of incorporating discriminative models into LLMs and highlights the potential of our methodology in fostering more reliable LLMs.

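The abstract describes a plug-in approach in which predictions from task-specific fine-tuned small language models (SLMs) are supplied to an LLM at inference time to improve in-context learning. The sketch below is only an illustration of that general idea under assumed interfaces, not the authors' released implementation; the names SlmOutput, build_prompt, and call_llm are hypothetical and do not come from the paper or its repository.

# Illustrative sketch only: shows, under assumed interfaces, how an SLM's
# prediction and confidence could be folded into an LLM prompt at inference
# time. All names below are hypothetical.

from dataclasses import dataclass


@dataclass
class SlmOutput:
    """Prediction produced by a task-specific fine-tuned SLM."""
    label: str         # e.g. "entailment"
    confidence: float  # e.g. 0.91


def build_prompt(instruction: str, example: str, slm: SlmOutput) -> str:
    """Compose an in-context prompt that exposes the SLM's judgment to the LLM."""
    return (
        f"{instruction}\n\n"
        f"Input: {example}\n"
        f"A task-specific fine-tuned model predicts: {slm.label} "
        f"(confidence {slm.confidence:.2f}).\n"
        "Taking this auxiliary prediction into account, give your final answer "
        "and briefly explain your reasoning.\n"
        "Answer:"
    )


def call_llm(prompt: str) -> str:
    """Placeholder for a call to an actual LLM (e.g. Llama 2 or ChatGPT)."""
    raise NotImplementedError("Connect this to the LLM backend of your choice.")


if __name__ == "__main__":
    prompt = build_prompt(
        instruction="Decide whether the premise entails the hypothesis.",
        example="Premise: The cat sat on the mat. Hypothesis: An animal is on the mat.",
        slm=SlmOutput(label="entailment", confidence=0.91),
    )
    print(prompt)  # call_llm(prompt) would then query the LLM with this prompt

How the auxiliary prediction and its confidence are verbalized in the prompt is a design choice; the released repository should be consulted for the exact prompting templates used in the paper's experiments.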

Bibliographic Details
Published in: arXiv.org, 2024-04
Main Authors: Yang, Linyi; Zhang, Shuibai; Yu, Zhuohao; Bao, Guangsheng; Wang, Yidong; Wang, Jindong; Xu, Ruochen; Ye, Wei; Xie, Xing; Chen, Weizhu; Zhang, Yue
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Context; Empirical analysis; Language; Large language models
Source: Publicly Available Content Database (ProQuest)
Online Access: Get full text