
Sequential Explanations with Mental Model-Based Policies

The act of explaining across two parties is a feedback loop, where one provides information on what needs to be explained and the other provides an explanation relevant to this information. We apply a reinforcement learning framework which emulates this format by providing explanations based on the explainee's current mental model. We conduct novel online human experiments where explanations generated by various explanation methods are selected and presented to participants, using policies which observe participants' mental models, in order to optimize an interpretability proxy. Our results suggest that mental model-based policies (anchored in our proposed state representation) may increase interpretability over multiple sequential explanations, when compared to a random selection baseline. This work provides insight into how to select explanations which increase relevant information for users, and into conducting human-grounded experimentation to understand interpretability.
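The abstract describes a policy that observes a participant's mental model and chooses which explanation to present next, evaluated against a random-selection baseline. The sketch below is a minimal, hypothetical illustration of such a loop as a tabular epsilon-greedy bandit in Python; the method names, the single "agreement" state feature, and the synthetic reward are assumptions made for illustration, not the paper's actual explanation methods, state representation, or interpretability proxy.

```python
import random
from typing import Dict, Tuple

# Hypothetical pool of explanation methods the policy chooses between
# (e.g. feature-attribution, example-based, rule-based); names are placeholders.
METHODS = ["saliency", "example_based", "rule_based"]


class MentalModelPolicy:
    """Epsilon-greedy selection of the next explanation, conditioned on a
    coarse mental-model state (here: the participant's recent agreement with
    the underlying model's predictions). Tabular and illustrative only."""

    def __init__(self, epsilon: float = 0.1) -> None:
        self.epsilon = epsilon
        self.values: Dict[Tuple[float, str], float] = {}  # mean reward per (state, method)
        self.counts: Dict[Tuple[float, str], int] = {}

    def _key(self, agreement: float, method: str) -> Tuple[float, str]:
        return (round(agreement, 1), method)  # discretize the state

    def select(self, agreement: float) -> str:
        if random.random() < self.epsilon:
            return random.choice(METHODS)  # explore
        return max(METHODS, key=lambda m: self.values.get(self._key(agreement, m), 0.0))

    def update(self, agreement: float, method: str, reward: float) -> None:
        key = self._key(agreement, method)
        n = self.counts.get(key, 0) + 1
        self.counts[key] = n
        old = self.values.get(key, 0.0)
        self.values[key] = old + (reward - old) / n  # incremental mean of the reward


def simulate_participant(policy: MentalModelPolicy, steps: int = 5) -> float:
    """One sequential-explanation session with a synthetic interpretability
    proxy as the reward; the paper's experiments use human participants."""
    agreement, total = 0.5, 0.0
    for _ in range(steps):
        method = policy.select(agreement)
        # Synthetic reward, purely to make the loop runnable: pretend
        # example-based explanations help most when agreement is low.
        reward = random.random() * (1.5 - agreement if method == "example_based" else 1.0)
        policy.update(agreement, method, reward)
        agreement = min(1.0, agreement + 0.1 * reward)  # mental model improves
        total += reward
    return total


if __name__ == "__main__":
    policy = MentalModelPolicy()
    print(sum(simulate_participant(policy) for _ in range(200)))
```

A random-selection baseline corresponds to drawing from METHODS uniformly and skipping the update step; comparing cumulative reward across the two policies mirrors, at toy scale, the comparison the abstract reports.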

Bibliographic Details
Published in: arXiv.org, 2020-07-17
Main Authors: Yeung, Arnold YS; Joshi, Shalmali; Williams, Joseph Jay; Rudzicz, Frank
Format: Article
Language: English
Subjects: Experimentation; Feedback loops; Policies
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Online Access: Get full text