
Online Explanation Generation for Planning Tasks in Human-Robot Teaming

As AI becomes an integral part of our lives, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations to justify its behavior is one of the key requirements of explainable agency. Prior work on explanation generation has focused on supporting the rationale behind the robot's decision or behavior. These approaches, however, fail to consider the mental demand for understanding the received explanation. In other words, the human teammate is expected to understand an explanation no matter how much information is presented. In this work, we argue that explanations, especially those of a complex nature, should be made in an online fashion during execution, which helps spread out the information to be explained and thus reduces the mental workload of humans in highly cognitively demanding tasks. However, a challenge here is that the different parts of an explanation may be dependent on each other, which must be taken into account when generating online explanations. To this end, a general formulation of online explanation generation is presented with three variations satisfying different "online" properties. The new explanation generation methods are based on a model reconciliation setting introduced in our prior work. We evaluated our methods both with human subjects in a simulated rover domain, using the NASA Task Load Index (TLX), and synthetically with ten different problems across two standard IPC domains. Results strongly suggest that our methods generate explanations that are perceived as less cognitively demanding and much preferred over the baselines, and are computationally efficient.
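The model-reconciliation idea behind the abstract can be illustrated with a minimal sketch. This is not the paper's actual formulation: here models are simplified to flat sets of facts, and the function names, the dependency map, and the toy rover facts are all hypothetical. The "online" variant emits each model update just before the first plan step that depends on it, rather than all at once up front.

```python
# Hypothetical sketch of online explanation generation via model
# reconciliation (a heavy simplification of the paper's setting).
# A model is a set of domain facts; an explanation is the set of
# facts in the robot's model, missing from the human's model, that
# the plan depends on.

def full_explanation(robot_model, human_model, plan_deps):
    """All model updates the human needs to understand the plan (batch)."""
    needed = set().union(*plan_deps.values()) if plan_deps else set()
    return (robot_model - human_model) & needed

def online_schedule(robot_model, human_model, plan, plan_deps):
    """Yield (step, updates): updates delivered just before each step."""
    explained = set(human_model)          # what the human already knows
    for step in plan:
        missing = (plan_deps.get(step, set()) & robot_model) - explained
        explained |= missing              # each fact is explained only once
        yield step, missing

# Toy rover-style example (fact and step names are illustrative only).
robot = {"rock-at-w1", "battery-low", "charger-at-w2"}
human = {"rock-at-w1"}
plan = ["goto-w2", "recharge", "goto-w1", "sample-rock"]
deps = {"goto-w2": {"battery-low"}, "recharge": {"charger-at-w2"},
        "sample-rock": {"rock-at-w1"}}

for step, updates in online_schedule(robot, human, plan, deps):
    print(step, sorted(updates))
```

In this sketch the human hears about the low battery only when the robot detours to waypoint w2, and about the charger only before recharging; "rock-at-w1" is never explained because it is already in the human's model. The paper's three variations impose different orderings and dependency constraints on such a schedule.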


Bibliographic Details
Main Authors: Zakershahrak, Mehrdad, Gong, Ze, Sadassivam, Nikhillesh, Zhang, Yu
Format: Conference Proceeding
Language: English
Subjects: Artificial intelligence; Intelligent robots; Load modeling; NASA; Planning; Task analysis
Online Access: Request full text
container_end_page 6310
container_start_page 6304
creator Zakershahrak, Mehrdad
Gong, Ze
Sadassivam, Nikhillesh
Zhang, Yu
description As AI becomes an integral part of our lives, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations to justify its behavior is one of the key requirements of explainable agency. Prior work on explanation generation has focused on supporting the rationale behind the robot's decision or behavior. These approaches, however, fail to consider the mental demand for understanding the received explanation. In other words, the human teammate is expected to understand an explanation no matter how much information is presented. In this work, we argue that explanations, especially those of a complex nature, should be made in an online fashion during execution, which helps spread out the information to be explained and thus reduces the mental workload of humans in highly cognitively demanding tasks. However, a challenge here is that the different parts of an explanation may be dependent on each other, which must be taken into account when generating online explanations. To this end, a general formulation of online explanation generation is presented with three variations satisfying different "online" properties. The new explanation generation methods are based on a model reconciliation setting introduced in our prior work. We evaluated our methods both with human subjects in a simulated rover domain, using the NASA Task Load Index (TLX), and synthetically with ten different problems across two standard IPC domains. Results strongly suggest that our methods generate explanations that are perceived as less cognitively demanding and much preferred over the baselines, and are computationally efficient.
doi_str_mv 10.1109/IROS45743.2020.9341792
format conference_proceeding
publisher IEEE
startdate 2020-10-24
eisbn 9781728162126
eisbn 1728162122
tpages 7
fulltext fulltext_linktorsrc
identifier EISSN: 2153-0866
ispartof 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, p.6304-6310
issn 2153-0866
language eng
recordid cdi_ieee_primary_9341792
source IEEE Xplore All Conference Series
subjects Artificial intelligence
Intelligent robots
Load modeling
NASA
Planning
Task analysis
title Online Explanation Generation for Planning Tasks in Human-Robot Teaming
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-13T19%3A41%3A16IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Online%20Explanation%20Generation%20for%20Planning%20Tasks%20in%20Human-Robot%20Teaming&rft.btitle=2020%20IEEE/RSJ%20International%20Conference%20on%20Intelligent%20Robots%20and%20Systems%20(IROS)&rft.au=Zakershahrak,%20Mehrdad&rft.date=2020-10-24&rft.spage=6304&rft.epage=6310&rft.pages=6304-6310&rft.eissn=2153-0866&rft_id=info:doi/10.1109/IROS45743.2020.9341792&rft.eisbn=9781728162126&rft.eisbn_list=1728162122&rft_dat=%3Cieee_CHZPO%3E9341792%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i203t-4f6d0ff8105d7f179b80820a05ae846e401e64c02b55c56a46ebabb38bcdeba03%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=9341792&rfr_iscdi=true