Off-Dynamics Inverse Reinforcement Learning from Hetero-Domain

We propose an approach for inverse reinforcement learning from hetero-domain which learns a reward function in the simulator, drawing on the demonstrations from the real world. The intuition behind the method is that the reward function should not only be oriented to imitate the experts, but should encourage actions adjusted for the dynamics difference between the simulator and the real world.

Bibliographic Details
Published in: arXiv.org, 2021-10
Main Authors: Kang, Yachen; Liu, Jinxin; Cao, Xin; Wang, Donglin
Format: Article
Language: English
Subjects: Control tasks; Domains; Dynamics; Learning; Simulation
container_title arXiv.org
creator Kang, Yachen; Liu, Jinxin; Cao, Xin; Wang, Donglin
description We propose an approach for inverse reinforcement learning from hetero-domain which learns a reward function in the simulator, drawing on the demonstrations from the real world. The intuition behind the method is that the reward function should not only be oriented to imitate the experts, but should encourage actions adjusted for the dynamics difference between the simulator and the real world. To achieve this, the widely used GAN-inspired IRL method is adopted, and its discriminator, recognizing policy-generating trajectories, is modified with the quantification of dynamics difference. The training process of the discriminator can yield the transferable reward function suitable for simulator dynamics, which can be guaranteed by derivation. Effectively, our method assigns higher rewards for demonstration trajectories which do not exploit discrepancies between the two domains. With extensive experiments on continuous control tasks, our method shows its effectiveness and demonstrates its scalability to high-dimensional tasks.
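The description above states the mechanism only in prose: a GAN-inspired IRL discriminator is modified with a quantification of the dynamics difference between the simulator and the real world. Below is a minimal illustrative sketch of that kind of modification, assuming an AIRL-style discriminator logit and two binary domain classifiers for the gap estimate; the names (clf_sas, clf_sa, f, log_pi_a) are hypothetical placeholders, and the sign and exact form of the correction used in the paper may differ.

import torch

def dynamics_gap(clf_sas, clf_sa, s, a, s_next):
    # Two binary "real vs. sim" classifiers give log-odds; their difference
    # estimates log p_real(s'|s,a) - log p_sim(s'|s,a), i.e. how much a
    # transition relies on dynamics that differ between the two domains.
    logit_sas = clf_sas(torch.cat([s, a, s_next], dim=-1))
    logit_sa = clf_sa(torch.cat([s, a], dim=-1))
    return logit_sas - logit_sa

def modified_discriminator_logit(f, gap, log_pi_a):
    # AIRL-style logit f(s,a,s') - log pi(a|s), shifted by the gap term so
    # that transitions relying on simulator-only dynamics score lower.
    return f + gap - log_pi_a

In such a setup the gap classifiers would be fit on transitions collected from both domains, while f plays the role of the learned, transferable reward.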
format article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2021-10
issn 2331-8422
language eng
recordid cdi_proquest_journals_2585639375
source Publicly Available Content Database
subjects Control tasks
Domains
Dynamics
Learning
Simulation
title Off-Dynamics Inverse Reinforcement Learning from Hetero-Domain