
Direct Language Model Alignment from Online AI Feedback

Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF) that do not require a separate reward model. However, the preference datasets used in DAP methods are usually collected ahead of training and never updated, thus the feedback is purely offline. Moreover, responses in these datasets are often sampled from a language model distinct from the one being aligned, and since the model evolves over training, the alignment phase is inevitably off-policy. In this study, we posit that online feedback is key and improves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as annotator: on each training iteration, we sample two responses from the current model and prompt the LLM annotator to choose which one is preferred, thus providing online feedback. Despite its simplicity, we demonstrate via human evaluation on several tasks that OAIF outperforms both offline DAP and RLHF methods. We further show that the feedback leveraged in OAIF is easily controllable, via instruction prompts to the LLM annotator.
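
The abstract describes the OAIF loop only in words; below is a minimal, illustrative sketch of one such iteration. It is not the authors' implementation: the `sample`, `policy_logprob`, `ref_logprob`, and `annotator_prefers` callables, the `beta` value, and the toy stand-ins at the end are assumptions introduced here to show how an online AI preference could feed a DPO-style update.

```python
import math
import random
from typing import Callable

# Hypothetical interfaces (assumptions for this sketch): in a real setup these
# would wrap the language model being aligned, a frozen reference copy of it,
# and a prompted LLM annotator.
SampleFn = Callable[[str], str]                # prompt -> sampled response
LogProbFn = Callable[[str, str], float]        # (prompt, response) -> log p(response | prompt)
PreferenceFn = Callable[[str, str, str], int]  # (prompt, resp_a, resp_b) -> 0 if resp_a preferred, else 1


def oaif_dpo_step(prompt: str,
                  sample: SampleFn,
                  policy_logprob: LogProbFn,
                  ref_logprob: LogProbFn,
                  annotator_prefers: PreferenceFn,
                  beta: float = 0.1) -> float:
    """One simplified OAIF iteration: sample two on-policy responses, ask the
    LLM annotator which is preferred, and score the fresh (online) pair with a
    DPO-style loss."""
    # 1) Sample two candidate responses from the *current* policy.
    resp_a, resp_b = sample(prompt), sample(prompt)

    # 2) Online AI feedback: the LLM annotator picks the preferred response.
    chosen, rejected = (resp_a, resp_b) if annotator_prefers(prompt, resp_a, resp_b) == 0 else (resp_b, resp_a)

    # 3) DPO loss on the freshly collected pair:
    #    -log sigmoid(beta * [(log pi(y_w|x) - log pi_ref(y_w|x)) - (log pi(y_l|x) - log pi_ref(y_l|x))])
    margin = beta * ((policy_logprob(prompt, chosen) - ref_logprob(prompt, chosen))
                     - (policy_logprob(prompt, rejected) - ref_logprob(prompt, rejected)))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))


# Toy stand-ins, purely to show the call pattern; they do not reflect the paper's setup.
loss = oaif_dpo_step(
    prompt="Summarise the article in one sentence.",
    sample=lambda p: random.choice(["A short summary.", "A much longer, more detailed summary."]),
    policy_logprob=lambda p, r: -0.1 * len(r),
    ref_logprob=lambda p, r: -0.1 * len(r) - 0.5,
    annotator_prefers=lambda p, a, b: 0 if len(a) <= len(b) else 1,
)
print(f"online DPO loss on one pair: {loss:.4f}")
```

In an actual training run, the loss from each such step would be backpropagated through the policy's parameters before the next iteration, which is what makes the feedback online and on-policy rather than drawn from a fixed, pre-collected preference dataset.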

Bibliographic Details
Published in: arXiv.org, 2024-02
Main Authors: Guo, Shangmin; Zhang, Biao; Liu, Tianlin; Liu, Tianqi; Khalman, Misha; Llinares, Felipe; Rame, Alexandre; Mesnard, Thomas; Zhao, Yao; Piot, Bilal; Ferret, Johan; Blondel, Mathieu
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Alignment; Controllability; Datasets; Feedback; Iterative methods; Large language models
Online Access: Publicly Available Content (ProQuest)