
Offline Reinforcement Learning as Anti-exploration

Offline Reinforcement Learning (RL) aims at learning an optimal control from a fixed dataset, without interactions with the system. An agent in this setting should avoid selecting actions whose consequences cannot be predicted from the data. This is the converse of exploration in RL, which favors such actions. We thus take inspiration from the literature on bonus-based exploration to design a new offline RL agent. The core idea is to subtract a prediction-based exploration bonus from the reward, instead of adding it for exploration. This allows the policy to stay close to the support of the dataset and practically extends some previous pessimism-based offline RL methods to a deep learning setting with arbitrary bonuses. We also connect this approach to a more common regularization of the learned policy towards the data. Instantiated with a bonus based on the prediction error of a variational autoencoder, we show that our simple agent is competitive with the state of the art on a set of continuous control locomotion and manipulation tasks.
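
The reward modification described in the abstract lends itself to a short sketch. The snippet below is a minimal illustration, not the authors' implementation: the paper uses a variational autoencoder's prediction error as the bonus, while here a stand-in `reconstruct` function, the helper `bonus`, and the scale `alpha` are all illustrative assumptions. The key point is that the bonus is subtracted from the dataset reward rather than added, penalizing state-action pairs that the model cannot predict from the data.

```python
# Minimal sketch of anti-exploration reward shaping (illustrative, not the
# paper's code): r_shaped(s, a) = r(s, a) - alpha * bonus(s, a).
import numpy as np

rng = np.random.default_rng(0)

def bonus(state_action, reconstruct):
    # Prediction-error bonus: large where the model reconstructs (s, a)
    # poorly, i.e. outside the support of the dataset.
    recon = reconstruct(state_action)
    return np.sum((state_action - recon) ** 2, axis=-1)

# Stand-in for a VAE trained on dataset (s, a) pairs: assumed to be accurate
# near the data mean and increasingly poor far from it.
data_mean = np.zeros(4)
reconstruct = lambda x: 0.9 * x + 0.1 * data_mean

alpha = 1.0                          # bonus scale (hyperparameter)
sa_batch = rng.normal(size=(32, 4))  # batch of concatenated (s, a) vectors
rewards = rng.normal(size=32)        # dataset rewards r(s, a)

# Bonus-based exploration would ADD the bonus; anti-exploration SUBTRACTS it,
# keeping the learned policy close to the support of the dataset.
shaped_rewards = rewards - alpha * bonus(sa_batch, reconstruct)
```
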
Bibliographic Details
Main Authors: Rezaeifar, Shideh, Dadashi, Robert, Vieillard, Nino, Hussenot, Léonard, Bachem, Olivier, Pietquin, Olivier, Geist, Matthieu
Format: Conference Proceeding
Language: English
DOI: 10.1609/aaai.v36i7.20783
Published in: Proceedings of the ... AAAI Conference on Artificial Intelligence, 2022, Vol. 36 (7), pp. 8106-8114
Publication Date: 2022-06-28
ISSN: 2159-5399
EISSN: 2374-3468
Source: Freely Accessible Journals