Evaluating Domain Randomization in Deep Reinforcement Learning Locomotion Tasks
Domain randomization in the context of reinforcement learning (RL) involves training RL agents with randomized environmental properties or parameters to improve the generalization capabilities of the resulting agents. Although domain randomization has been studied favorably in the literature, it has mostly been studied by varying the operational characteristics of the associated systems or their physical dynamics rather than their environmental characteristics. This is counter-intuitive, as it is unrealistic to alter the mechanical dynamics of a system in operation. Furthermore, most works were based on cherry-picked environments within different classes of RL tasks. Therefore, in this work, we investigated domain randomization by varying only the properties or parameters of the environment rather than the mechanical dynamics of the featured systems, and the analysis was conducted on all six RL locomotion tasks. To train the RL agents, we employed two proven RL algorithms (SAC and TD3) and evaluated the generalization capabilities of the resulting agents on several train–test scenarios that involve both in-distribution and out-of-distribution evaluations as well as scenarios applicable in the real world. The results demonstrate that, although domain randomization favors generalization, some tasks only require randomization from low-dimensional distributions while others require randomization from high-dimensional distributions. Hence, the question of what level of randomization is optimal for any given task becomes very important.
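The article does not include code, but the setup described in the abstract can be illustrated with a short sketch. The example below assumes Gymnasium's MuJoCo locomotion environments and the stable-baselines3 implementation of SAC; the wrapper, the choice of gravity as the randomized environment parameter, the HalfCheetah task, and the sampling ranges are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (not the authors' code): environment-parameter domain
# randomization for a Gymnasium MuJoCo locomotion task, trained with the
# stable-baselines3 SAC implementation.
import numpy as np
import gymnasium as gym
from stable_baselines3 import SAC


class GravityRandomizationWrapper(gym.Wrapper):
    """Resample a gravity scale on every reset, leaving the robot's own
    dynamics (masses, joint limits, actuators) untouched."""

    def __init__(self, env, low=0.8, high=1.2):
        super().__init__(env)
        self.low, self.high = low, high
        # Cache the nominal gravity so every reset scales the same baseline.
        self._nominal = env.unwrapped.model.opt.gravity.copy()

    def reset(self, **kwargs):
        scale = np.random.uniform(self.low, self.high)
        self.env.unwrapped.model.opt.gravity[:] = self._nominal * scale
        return self.env.reset(**kwargs)


def evaluate(model, env, episodes=5):
    """Average undiscounted return over a few episodes."""
    returns = []
    for _ in range(episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    return float(np.mean(returns))


if __name__ == "__main__":
    # Train under in-distribution randomization (gravity scaled by 0.8-1.2).
    train_env = GravityRandomizationWrapper(gym.make("HalfCheetah-v4"), 0.8, 1.2)
    model = SAC("MlpPolicy", train_env, verbose=0)
    model.learn(total_timesteps=50_000)

    # Evaluate out of distribution: a fixed gravity scale outside the
    # training range, in the spirit of the paper's train-test scenarios.
    test_env = GravityRandomizationWrapper(gym.make("HalfCheetah-v4"), 1.5, 1.5)
    print("OOD mean return:", evaluate(model, test_env, episodes=5))
```

Setting `low == high` recovers a fixed, non-randomized environment, while randomizing additional parameters (for example, friction alongside gravity) corresponds to the higher-dimensional randomization distributions discussed in the abstract.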
Published in: | Mathematics (Basel) 2023-12, Vol.11 (23), p.4744 |
---|---|
Main Authors: | Ajani, Oladayo S.; Hur, Sung-ho; Mallipeddi, Rammohan |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Deep learning; deep reinforcement learning; domain randomization; dynamic environments; Friction; generalization; Locomotion; Optimization; Parameters; Randomization; Robots; Simulation |
DOI: | 10.3390/math11234744 |
ISSN: | 2227-7390 |
Publisher: | MDPI AG, Basel |