
Improving the Robustness of Large Language Models via Consistency Alignment

Large language models (LLMs) have shown tremendous success in following user instructions and generating helpful responses. Nevertheless, their robustness is still far from optimal, as they may generate significantly inconsistent responses due to minor changes in the verbalized instructions. Recent literature has explored this inconsistency issue, highlighting the importance of continued improvement in the robustness of response generation. However, systematic analysis and solutions are still lacking. In this paper, we quantitatively define the inconsistency problem and propose a two-stage training framework consisting of instruction-augmented supervised fine-tuning and consistency alignment training. The first stage helps a model generalize on following instructions via similar instruction augmentations. In the second stage, we improve the diversity and help the model understand which responses are more aligned with human expectations by differentiating subtle differences in similar responses. The training process is accomplished by self-rewards inferred from the trained model at the first stage without referring to external human preference resources. We conduct extensive experiments on recent publicly available LLMs on instruction-following tasks and demonstrate the effectiveness of our training framework.
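The abstract outlines a two-stage recipe: instruction-augmented supervised fine-tuning, followed by consistency alignment driven by self-rewards rather than external human preference data. The sketch below is only an illustrative reading of that description, not the authors' released code; every name in it (paraphrase_instruction, build_sft_pairs, consistency_reward, rank_for_alignment, toy_generate) is a hypothetical placeholder, and plain string similarity stands in for the model-derived self-reward.

```python
# Illustrative sketch of the two-stage idea described in the abstract.
# All function names are hypothetical placeholders; string similarity is a
# stand-in for the self-reward the paper infers from the stage-1 model.

from difflib import SequenceMatcher
from typing import Callable, List, Tuple


def paraphrase_instruction(instruction: str) -> List[str]:
    """Stage 1: produce verbalized variants of one instruction.

    A real pipeline would paraphrase with an LLM; trivial template rewrites
    are used here purely for illustration.
    """
    return [
        instruction,
        f"Please {instruction[0].lower() + instruction[1:]}",
        f"{instruction} Answer concisely.",
    ]


def build_sft_pairs(instruction: str, reference: str) -> List[Tuple[str, str]]:
    """Instruction-augmented SFT data: every variant maps to the same reference."""
    return [(variant, reference) for variant in paraphrase_instruction(instruction)]


def consistency_reward(response: str, anchor: str) -> float:
    """Self-reward proxy: how consistent a response is with the anchor response."""
    return SequenceMatcher(None, response, anchor).ratio()


def rank_for_alignment(
    generate: Callable[[str], str], instruction: str
) -> List[Tuple[float, str, str]]:
    """Stage 2: rank responses to instruction variants by consistency."""
    anchor = generate(instruction)
    scored = []
    for variant in paraphrase_instruction(instruction):
        response = generate(variant)
        scored.append((consistency_reward(response, anchor), variant, response))
    # High-reward responses would serve as "preferred" examples for alignment.
    return sorted(scored, key=lambda item: item[0], reverse=True)


if __name__ == "__main__":
    # Stub "model" so the example is self-contained and runnable.
    def toy_generate(prompt: str) -> str:
        return "Paris is the capital of France." if "capital" in prompt.lower() else "I am not sure."

    print(build_sft_pairs("Name the capital of France.", "Paris is the capital of France."))
    print(rank_for_alignment(toy_generate, "Name the capital of France."))
```

In a real pipeline, generate would wrap the stage-1 fine-tuned model, and the ranked responses would supply the preferred and dispreferred pairs for the consistency alignment step; the stub here exists only to keep the example self-contained.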


Bibliographic Details
Published in: arXiv.org, 2024-03
Main Authors: Zhao, Yukun; Yan, Lingyong; Sun, Weiwei; Xing, Guoliang; Wang, Shuaiqiang; Meng, Chong; Cheng, Zhicong; Ren, Zhaochun; Yin, Dawei
Format: Article
Language: English
Subjects: Alignment; Consistency; Large language models; Robustness; Training
Identifier: EISSN 2331-8422
Record ID: cdi_proquest_journals_2973279398
Source: Publicly Available Content Database