Reframing Human-AI Collaboration for Generating Free-Text Explanations
Large language models are increasingly capable of generating fluent-appearing text with relatively little task-specific supervision. But can these models accurately explain classification decisions? We consider the task of generating free-text explanations using human-written examples in a few-shot manner.
Published in: | arXiv.org 2022-05 |
---|---|
Main Authors: | Wiegreffe, Sarah; Hessel, Jack; Swayamdipta, Swabha; Riedl, Mark; Choi, Yejin |
Format: | Article |
Language: | English |
Subjects: | Acceptability |
Online Access: | Get full text |
cited_by | |
---|---|
cites | |
container_end_page | |
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Wiegreffe, Sarah; Hessel, Jack; Swayamdipta, Swabha; Riedl, Mark; Choi, Yejin |
description | Large language models are increasingly capable of generating fluent-appearing text with relatively little task-specific supervision. But can these models accurately explain classification decisions? We consider the task of generating free-text explanations using human-written examples in a few-shot manner. We find that (1) authoring higher quality prompts results in higher quality generations; and (2) surprisingly, in a head-to-head comparison, crowdworkers often prefer explanations generated by GPT-3 to crowdsourced explanations in existing datasets. Our human studies also show, however, that while models often produce factual, grammatical, and sufficient explanations, they have room to improve along axes such as providing novel information and supporting the label. We create a pipeline that combines GPT-3 with a supervised filter that incorporates binary acceptability judgments from humans in the loop. Despite the intrinsic subjectivity of acceptability judgments, we demonstrate that acceptability is partially correlated with various fine-grained attributes of explanations. Our approach is able to consistently filter GPT-3-generated explanations deemed acceptable by humans. |
format | article |
fullrecord | <record><control><sourceid>proquest</sourceid><recordid>TN_cdi_proquest_journals_2611010828</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2611010828</sourcerecordid><originalsourceid>FETCH-proquest_journals_26110108283</originalsourceid><addsrcrecordid>eNqNytEKgjAYhuERBEl5D4OOB9u_tJ2GaHYansuC31DmZpsDLz-LLqCjj5fv2ZAEpBRMnQB2JA1h4JxDfoYskwmp7th5Pfb2Ses4assuN1o4Y_TDeT33ztLOeXpFi59cVeURWYPLTMtlMtp-UTiQbadNwPS3e3Ksyqao2eTdK2KY28FFb9erhVwILrgCJf9Tb46iOu4</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2611010828</pqid></control><display><type>article</type><title>Reframing Human-AI Collaboration for Generating Free-Text Explanations</title><source>Publicly Available Content Database</source><creator>Wiegreffe, Sarah ; Hessel, Jack ; Swayamdipta, Swabha ; Riedl, Mark ; Choi, Yejin</creator><creatorcontrib>Wiegreffe, Sarah ; Hessel, Jack ; Swayamdipta, Swabha ; Riedl, Mark ; Choi, Yejin</creatorcontrib><description>Large language models are increasingly capable of generating fluent-appearing text with relatively little task-specific supervision. But can these models accurately explain classification decisions? We consider the task of generating free-text explanations using human-written examples in a few-shot manner. We find that (1) authoring higher quality prompts results in higher quality generations; and (2) surprisingly, in a head-to-head comparison, crowdworkers often prefer explanations generated by GPT-3 to crowdsourced explanations in existing datasets. Our human studies also show, however, that while models often produce factual, grammatical, and sufficient explanations, they have room to improve along axes such as providing novel information and supporting the label. We create a pipeline that combines GPT-3 with a supervised filter that incorporates binary acceptability judgments from humans in the loop. Despite the intrinsic subjectivity of acceptability judgments, we demonstrate that acceptability is partially correlated with various fine-grained attributes of explanations. Our approach is able to consistently filter GPT-3-generated explanations deemed acceptable by humans.</description><identifier>EISSN: 2331-8422</identifier><language>eng</language><publisher>Ithaca: Cornell University Library, arXiv.org</publisher><subject>Acceptability</subject><ispartof>arXiv.org, 2022-05</ispartof><rights>2022. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). 
Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.</rights><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktohtml>$$Uhttps://www.proquest.com/docview/2611010828?pq-origsite=primo$$EHTML$$P50$$Gproquest$$Hfree_for_read</linktohtml><link.rule.ids>777,781,25734,36993,44571</link.rule.ids></links><search><creatorcontrib>Wiegreffe, Sarah</creatorcontrib><creatorcontrib>Hessel, Jack</creatorcontrib><creatorcontrib>Swayamdipta, Swabha</creatorcontrib><creatorcontrib>Riedl, Mark</creatorcontrib><creatorcontrib>Choi, Yejin</creatorcontrib><title>Reframing Human-AI Collaboration for Generating Free-Text Explanations</title><title>arXiv.org</title><description>Large language models are increasingly capable of generating fluent-appearing text with relatively little task-specific supervision. But can these models accurately explain classification decisions? We consider the task of generating free-text explanations using human-written examples in a few-shot manner. We find that (1) authoring higher quality prompts results in higher quality generations; and (2) surprisingly, in a head-to-head comparison, crowdworkers often prefer explanations generated by GPT-3 to crowdsourced explanations in existing datasets. Our human studies also show, however, that while models often produce factual, grammatical, and sufficient explanations, they have room to improve along axes such as providing novel information and supporting the label. We create a pipeline that combines GPT-3 with a supervised filter that incorporates binary acceptability judgments from humans in the loop. Despite the intrinsic subjectivity of acceptability judgments, we demonstrate that acceptability is partially correlated with various fine-grained attributes of explanations. 
Our approach is able to consistently filter GPT-3-generated explanations deemed acceptable by humans.</description><subject>Acceptability</subject><issn>2331-8422</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2022</creationdate><recordtype>article</recordtype><sourceid>PIMPY</sourceid><recordid>eNqNytEKgjAYhuERBEl5D4OOB9u_tJ2GaHYansuC31DmZpsDLz-LLqCjj5fv2ZAEpBRMnQB2JA1h4JxDfoYskwmp7th5Pfb2Ses4assuN1o4Y_TDeT33ztLOeXpFi59cVeURWYPLTMtlMtp-UTiQbadNwPS3e3Ksyqao2eTdK2KY28FFb9erhVwILrgCJf9Tb46iOu4</recordid><startdate>20220504</startdate><enddate>20220504</enddate><creator>Wiegreffe, Sarah</creator><creator>Hessel, Jack</creator><creator>Swayamdipta, Swabha</creator><creator>Riedl, Mark</creator><creator>Choi, Yejin</creator><general>Cornell University Library, arXiv.org</general><scope>8FE</scope><scope>8FG</scope><scope>ABJCF</scope><scope>ABUWG</scope><scope>AFKRA</scope><scope>AZQEC</scope><scope>BENPR</scope><scope>BGLVJ</scope><scope>CCPQU</scope><scope>DWQXO</scope><scope>HCIFZ</scope><scope>L6V</scope><scope>M7S</scope><scope>PIMPY</scope><scope>PQEST</scope><scope>PQQKQ</scope><scope>PQUKI</scope><scope>PRINS</scope><scope>PTHSS</scope></search><sort><creationdate>20220504</creationdate><title>Reframing Human-AI Collaboration for Generating Free-Text Explanations</title><author>Wiegreffe, Sarah ; Hessel, Jack ; Swayamdipta, Swabha ; Riedl, Mark ; Choi, Yejin</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-proquest_journals_26110108283</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2022</creationdate><topic>Acceptability</topic><toplevel>online_resources</toplevel><creatorcontrib>Wiegreffe, Sarah</creatorcontrib><creatorcontrib>Hessel, Jack</creatorcontrib><creatorcontrib>Swayamdipta, Swabha</creatorcontrib><creatorcontrib>Riedl, Mark</creatorcontrib><creatorcontrib>Choi, Yejin</creatorcontrib><collection>ProQuest SciTech Collection</collection><collection>ProQuest Technology Collection</collection><collection>Materials Science & Engineering Collection</collection><collection>ProQuest Central (Alumni)</collection><collection>ProQuest Central</collection><collection>ProQuest Central Essentials</collection><collection>ProQuest Central</collection><collection>Technology Collection</collection><collection>ProQuest One Community College</collection><collection>ProQuest Central Korea</collection><collection>SciTech Premium Collection</collection><collection>ProQuest Engineering Collection</collection><collection>Engineering Database</collection><collection>Publicly Available Content Database</collection><collection>ProQuest One Academic Eastern Edition (DO NOT USE)</collection><collection>ProQuest One Academic</collection><collection>ProQuest One Academic UKI Edition</collection><collection>ProQuest Central China</collection><collection>Engineering Collection</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Wiegreffe, Sarah</au><au>Hessel, Jack</au><au>Swayamdipta, Swabha</au><au>Riedl, Mark</au><au>Choi, Yejin</au><format>book</format><genre>document</genre><ristype>GEN</ristype><atitle>Reframing Human-AI Collaboration for Generating Free-Text Explanations</atitle><jtitle>arXiv.org</jtitle><date>2022-05-04</date><risdate>2022</risdate><eissn>2331-8422</eissn><abstract>Large language models are increasingly capable of generating fluent-appearing text with relatively little task-specific supervision. 
But can these models accurately explain classification decisions? We consider the task of generating free-text explanations using human-written examples in a few-shot manner. We find that (1) authoring higher quality prompts results in higher quality generations; and (2) surprisingly, in a head-to-head comparison, crowdworkers often prefer explanations generated by GPT-3 to crowdsourced explanations in existing datasets. Our human studies also show, however, that while models often produce factual, grammatical, and sufficient explanations, they have room to improve along axes such as providing novel information and supporting the label. We create a pipeline that combines GPT-3 with a supervised filter that incorporates binary acceptability judgments from humans in the loop. Despite the intrinsic subjectivity of acceptability judgments, we demonstrate that acceptability is partially correlated with various fine-grained attributes of explanations. Our approach is able to consistently filter GPT-3-generated explanations deemed acceptable by humans.</abstract><cop>Ithaca</cop><pub>Cornell University Library, arXiv.org</pub><oa>free_for_read</oa></addata></record> |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2611010828 |
source | Publicly Available Content Database |
subjects | Acceptability |
title | Reframing Human-AI Collaboration for Generating Free-Text Explanations |
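
The description above outlines a two-stage, generate-then-filter approach: a large language model is prompted few-shot with human-written example explanations to produce candidate explanations, and a supervised filter trained on binary human acceptability judgments keeps the candidates likely to be acceptable. The sketch below is a minimal illustration of that overall structure only, not the authors' code; the prompt layout, the `generate_explanation` stub (standing in for a GPT-3 call), and the TF-IDF plus logistic-regression filter are assumptions made for illustration.

```python
# Minimal sketch of the generate-then-filter idea described in the abstract.
# Assumptions (not from the paper): the prompt layout, the generate_explanation
# stub standing in for an LLM call, and a TF-IDF + logistic-regression filter.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical few-shot examples: (input, label, human-written explanation).
FEW_SHOT_EXAMPLES = [
    ("A man plays a guitar on stage. / The man is performing music.",
     "entailment",
     "Playing a guitar on stage is a way of performing music."),
]

def build_prompt(instance: str, label: str) -> str:
    """Assemble a few-shot prompt from human-written example explanations."""
    shots = "\n\n".join(
        f"Input: {x}\nLabel: {y}\nExplanation: {e}" for x, y, e in FEW_SHOT_EXAMPLES
    )
    return f"{shots}\n\nInput: {instance}\nLabel: {label}\nExplanation:"

def generate_explanation(prompt: str) -> str:
    """Stub standing in for a call to a large language model such as GPT-3."""
    raise NotImplementedError("replace with an actual LLM API call")

def train_acceptability_filter(explanations, judgments):
    """Fit a binary classifier on (explanation text, 0/1 acceptability) pairs."""
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(explanations, judgments)
    return clf

def keep_acceptable(clf, candidates, threshold=0.5):
    """Return only the candidate explanations the filter scores as acceptable."""
    probs = clf.predict_proba(candidates)[:, 1]
    return [c for c, p in zip(candidates, probs) if p >= threshold]
```

The classifier family and features here are placeholders for whatever supervised model is actually trained on the human judgments; the abstract's claim is only that such a filter consistently selects explanations humans deem acceptable.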