
Verbal Explanations for Deep Reinforcement Learning Neural Networks with Attention on Extracted Features


Bibliographic Details
Main Authors: Wang, Xinzhi, Yuan, Shengcheng, Zhang, Hui, Lewis, Michael, Sycara, Katia
Format: Conference Proceeding
Language: English
Online Access: Request full text
container_start_page 1
container_end_page 7
description In recent years, there has been increasing interest in transparency in Deep Neural Networks. Most work on transparency has been done for image classification. In this paper, we report on work on transparency in Deep Reinforcement Learning Networks (DRLNs). Such networks have been extremely successful in learning action control in Atari games. We focus on generating verbal (natural language) descriptions and explanations of deep reinforcement learning policies. Successful generation of verbal explanations would allow people (e.g., users, debuggers) to better understand the inner workings of DRLNs, which could ultimately increase trust in these systems. We present a generation model that consists of three parts: an encoder for feature extraction, an attention structure for selecting features from the output of the encoder, and a decoder for generating the explanation in natural language. Four variants of the attention structure - full attention, global attention, adaptive attention, and object attention - are designed and compared. The adaptive attention structure performs best among all the variants, even though the object attention structure is given additional information on object locations. Additionally, our experimental results show that the proposed encoder outperforms two baseline encoders (ResNet and VGG) in its capability to distinguish game state images.
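The middle stage of the model described above - an attention structure that selects among extracted features before the decoder generates a word - can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the bilinear scoring form, and all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(features, query, W):
    # features: (n, d) encoder feature vectors; query: (d,) decoder state
    scores = features @ (W @ query)   # one relevance score per feature
    weights = softmax(scores)         # attention distribution over features
    context = weights @ features      # weighted sum of features -> (d,)
    return context, weights

# toy setup: 5 extracted features of dimension 8 (sizes are illustrative)
features = rng.normal(size=(5, 8))
query = rng.normal(size=8)
W = rng.normal(size=(8, 8))

context, weights = attend(features, query, W)
```

The `context` vector would be fed to the decoder at each generation step; the four attention variants compared in the paper differ in how `scores` are computed and which features are eligible, but all reduce to producing a distribution like `weights` over encoder outputs.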
doi_str_mv 10.1109/RO-MAN46459.2019.8956301
format conference_proceeding
eisbn 9781728126227, 1728126223
publisher IEEE
publication_date 2019-10-01
identifier EISSN: 1944-9437
ispartof 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2019, p.1-7
issn 1944-9437
language eng
recordid cdi_ieee_primary_8956301
source IEEE Xplore All Conference Series
title Verbal Explanations for Deep Reinforcement Learning Neural Networks with Attention on Extracted Features