Offline Multitask Representation Learning for Reinforcement Learning
We study offline multitask representation learning in reinforcement learning (RL), where a learner is provided with an offline dataset from different tasks that share a common representation and is asked to learn the shared representation. We theoretically investigate offline multitask low-rank RL, and propose a new algorithm called MORL for offline multitask representation learning. Furthermore, we examine downstream RL in reward-free, offline and online scenarios, where a new task is introduced to the agent that shares the same representation as the upstream offline tasks. Our theoretical results demonstrate the benefits of using the learned representation from the upstream offline task instead of directly learning the representation of the low-rank model.
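For context, the "low-rank" structure referenced in the abstract is typically formalized as a low-rank MDP; the sketch below states the standard assumption (the notation $\phi_h$, $\mu_h$, $d$ is conventional and not taken from the paper, whose exact formulation may differ):

$$
P_h(s' \mid s, a) \;=\; \big\langle \phi_h(s, a),\, \mu_h(s') \big\rangle, \qquad \phi_h(s, a) \in \mathbb{R}^d,
$$

where the unknown feature map $\phi_h$ is the "representation". In the multitask setting described above, the tasks are assumed to share the same $\phi_h$ while differing in $\mu_h$ and the rewards, so an offline dataset pooled across the upstream tasks can be used to estimate $\phi_h$ once and then reuse it for downstream tasks.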
Published in: | arXiv.org, 2024-10 |
---|---|
Main Authors: | Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Representations; Upstream |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Haque Ishfaq; Nguyen-Tang, Thanh; Feng, Songtao; Arora, Raman; Wang, Mengdi; Yin, Ming; Precup, Doina |
description | We study offline multitask representation learning in reinforcement learning (RL), where a learner is provided with an offline dataset from different tasks that share a common representation and is asked to learn the shared representation. We theoretically investigate offline multitask low-rank RL, and propose a new algorithm called MORL for offline multitask representation learning. Furthermore, we examine downstream RL in reward-free, offline and online scenarios, where a new task is introduced to the agent that shares the same representation as the upstream offline tasks. Our theoretical results demonstrate the benefits of using the learned representation from the upstream offline task instead of directly learning the representation of the low-rank model. |
format | article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2968636271 |
source | Publicly Available Content (ProQuest) |
subjects | Algorithms; Representations; Upstream |
title | Offline Multitask Representation Learning for Reinforcement Learning |