Foundations of Multivariate Distributional Reinforcement Learning
In reinforcement learning (RL), the consideration of multivariate reward signals has led to fundamental advancements in multi-objective decision-making, transfer learning, and representation learning. This work introduces the first oracle-free and computationally-tractable algorithms for provably convergent multivariate distributional dynamic programming and temporal difference learning. Our convergence rates match the familiar rates in the scalar reward setting, and additionally provide new insights into the fidelity of approximate return distribution representations as a function of the reward dimension. Surprisingly, when the reward dimension is larger than \(1\), we show that standard analysis of categorical TD learning fails, which we resolve with a novel projection onto the space of mass-\(1\) signed measures. Finally, with the aid of our technical results and simulations, we identify tradeoffs between distribution representations that influence the performance of multivariate distributional RL in practice.
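To make the abstract's mention of a projection onto mass-\(1\) signed measures more concrete, the sketch below illustrates one way such a projection could look. It is only an illustration under assumed choices: the Gaussian-kernel MMD objective, the fixed grid of atoms, and the helper name `project_mass1_signed` are inventions here, not the construction used in the paper. The sketch fits signed weights over fixed support points subject to the single constraint that the weights sum to one, which is the kind of relaxation of the categorical simplex the abstract alludes to.

```python
import numpy as np


def project_mass1_signed(atoms, target_samples, bandwidth=1.0):
    """Project an empirical return distribution onto a signed measure of
    total mass 1 supported on a fixed set of atoms.

    Illustrative sketch only: minimizes a Gaussian-kernel MMD between the
    weighted atoms and the target samples, subject to the weights summing
    to 1 (individual weights may be negative). The kernel, solver, and
    function names are assumptions, not the paper's construction.
    """
    def gram(x, y):
        # Gaussian-kernel Gram matrix between two sets of d-dimensional points.
        sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

    K = gram(atoms, atoms)                        # (m, m)
    b = gram(atoms, target_samples).mean(axis=1)  # (m,)

    # Equality-constrained quadratic program:
    #   minimize  w^T K w - 2 b^T w   subject to   1^T w = 1,
    # solved through its KKT linear system.
    m = atoms.shape[0]
    ones = np.ones(m)
    kkt = np.block([[2.0 * K, ones[:, None]],
                    [ones[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([2.0 * b, [1.0]])
    solution = np.linalg.solve(kkt, rhs)
    return solution[:m]  # signed weights summing to 1


if __name__ == "__main__":
    # Project 200 samples of a 2-D return distribution onto a 5x5 grid of atoms.
    rng = np.random.default_rng(0)
    samples = rng.normal(size=(200, 2))
    grid = np.stack(np.meshgrid(np.linspace(-2, 2, 5),
                                np.linspace(-2, 2, 5)), axis=-1).reshape(-1, 2)
    weights = project_mass1_signed(grid, samples)
    print(weights.sum())  # ~1.0; some weights may be negative
```

Allowing signed weights keeps the projection a single linear solve; what is relaxed relative to the usual categorical simplex is only nonnegativity, in the spirit of the mass-\(1\) signed-measure space the abstract describes.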
Published in: | arXiv.org, 2024-08 |
---|---|
Main Authors: | Harley Wiltzer; Jesse Farebrother; Arthur Gretton; Mark Rowland |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Dynamic programming; Machine learning; Multivariate analysis; Representations |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Harley Wiltzer; Farebrother, Jesse; Gretton, Arthur; Rowland, Mark |
description | In reinforcement learning (RL), the consideration of multivariate reward signals has led to fundamental advancements in multi-objective decision-making, transfer learning, and representation learning. This work introduces the first oracle-free and computationally-tractable algorithms for provably convergent multivariate distributional dynamic programming and temporal difference learning. Our convergence rates match the familiar rates in the scalar reward setting, and additionally provide new insights into the fidelity of approximate return distribution representations as a function of the reward dimension. Surprisingly, when the reward dimension is larger than \(1\), we show that standard analysis of categorical TD learning fails, which we resolve with a novel projection onto the space of mass-\(1\) signed measures. Finally, with the aid of our technical results and simulations, we identify tradeoffs between distribution representations that influence the performance of multivariate distributional RL in practice. |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3100998070 |
source | Publicly Available Content (ProQuest) |
subjects | Algorithms; Dynamic programming; Machine learning; Multivariate analysis; Representations |
title | Foundations of Multivariate Distributional Reinforcement Learning |