Conformal Symplectic Optimization for Stable Reinforcement Learning
Published in: | IEEE Transactions on Neural Networks and Learning Systems, 2024-12, p.1-15 |
---|---|
Main Authors: | Lyu, Yao; Zhang, Xiangteng; Li, Shengbo Eben; Duan, Jingliang; Tao, Letian; Xu, Qing; He, Lei; Li, Keqiang |
Format: | Article |
Language: | English |
Subjects: | Artificial neural networks; Conformal Hamiltonian; Convergence; Dynamical systems; Heuristic algorithms; Kinetic energy; nonconvex stochastic optimization; Optimization; reinforcement learning (RL); Stability criteria; Stochastic processes; symplectic preservation; Thermal stability; Training; training stability |
Online Access: | Get full text |
container_end_page | 15 |
---|---|
container_start_page | 1 |
container_title | IEEE Transactions on Neural Networks and Learning Systems |
creator | Lyu, Yao; Zhang, Xiangteng; Li, Shengbo Eben; Duan, Jingliang; Tao, Letian; Xu, Qing; He, Lei; Li, Keqiang |
description | Training deep reinforcement learning (RL) agents necessitates overcoming the highly unstable nonconvex stochastic optimization inherent in the trial-and-error mechanism. To tackle this challenge, we propose a physics-inspired optimization algorithm called relativistic adaptive gradient descent (RAD), which enhances long-term training stability. By conceptualizing neural network (NN) training as the evolution of a conformal Hamiltonian system, we present a universal framework for transferring long-term stability from conformal symplectic integrators to iterative NN updating rules, where the choice of kinetic energy governs the dynamical properties of the resulting optimization algorithms. By utilizing relativistic kinetic energy, RAD incorporates principles from special relativity and limits parameter updates below a finite speed, effectively mitigating abnormal gradient influences. In addition, RAD models NN optimization as the evolution of a multiparticle system where each trainable parameter acts as an independent particle with an individual adaptive learning rate. We prove RAD's sublinear convergence under general nonconvex settings, where smaller gradient variance and larger batch sizes contribute to tighter convergence. Notably, RAD degrades to the well-known adaptive moment estimation (ADAM) algorithm when its speed coefficient is set to one and its symplectic factor to a small positive value. Experimental results show RAD outperforming nine baseline optimizers with five RL algorithms across twelve environments, including standard benchmarks and challenging scenarios. In particular, RAD achieves up to a 155.1% performance improvement over ADAM in Atari games, showcasing its efficacy in stabilizing and accelerating RL training. |
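For orientation, the two objects the abstract invokes can be written out in their standard form from the conformal-symplectic optimization literature. This is a sketch of the conventional definitions, not a reproduction of the paper's own equations, which may differ in detail:

```latex
% Conformal (damped) Hamiltonian flow with dissipation coefficient \gamma > 0,
% where f is the training loss and T the kinetic energy:
\dot{q} = \nabla_p H(q,p), \qquad
\dot{p} = -\nabla_q H(q,p) - \gamma p, \qquad
H(q,p) = T(p) + f(q)

% Relativistic kinetic energy (standard form; the paper's exact choice
% is not given in this record):
T(p) = c \sqrt{\lVert p \rVert^2 + m^2 c^2}
\;\Longrightarrow\;
\dot{q} = \frac{c\,p}{\sqrt{\lVert p \rVert^2 + m^2 c^2}},
\qquad \lVert \dot{q} \rVert < c
```

The bound on the velocity norm by the constant c is the "finite speed" property the abstract credits with mitigating abnormal gradients.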
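To make the stated Adam-degeneration property concrete, here is a minimal, hypothetical NumPy sketch of a RAD-flavoured per-parameter update. It is reconstructed from the abstract alone: the names `speed_coeff` and `symplectic_factor`, and the particular interpolating denominator, are illustrative assumptions and not the paper's actual update rule. The sketch only encodes the two properties the abstract states: a bounded per-parameter step, and collapse to Adam when the speed coefficient is one and the symplectic factor is a small positive value.

```python
import numpy as np

def rad_like_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                  speed_coeff=1.0, symplectic_factor=1e-8):
    """One hypothetical RAD-flavoured update (a reconstruction, not the paper's rule).

    Each parameter is treated as an independent "particle" with its own
    adaptive rate; with speed_coeff == 1 and a small positive
    symplectic_factor the step reduces to the familiar Adam update.
    """
    m = beta1 * m + (1.0 - beta1) * grad        # first moment, as in Adam
    v = beta2 * v + (1.0 - beta2) * grad ** 2   # second moment, as in Adam
    m_hat = m / (1.0 - beta1 ** t)              # bias corrections, as in Adam
    v_hat = v / (1.0 - beta2 ** t)
    # Hypothetical relativistic-style normalisation: mixing m_hat**2 into the
    # denominator caps each coordinate's step at lr / sqrt(1 - speed_coeff)
    # (a finite "speed of light"); speed_coeff == 1 recovers sqrt(v_hat) + eps.
    denom = np.sqrt(speed_coeff * v_hat
                    + (1.0 - speed_coeff) * m_hat ** 2) + symplectic_factor
    theta = theta - lr * m_hat / denom
    return theta, m, v

# Usage sketch on a toy quadratic loss f(x) = 0.5 * ||x||^2:
theta = np.array([5.0, -3.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 1001):
    grad = theta                                # gradient of the toy loss
    theta, m, v = rad_like_step(theta, grad, m, v, t, speed_coeff=0.9)
```

The interpolating denominator is just one construction satisfying both properties at once; per the abstract, the paper instead derives its update by discretizing the conformal Hamiltonian system with a conformal symplectic integrator.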
doi_str_mv | 10.1109/TNNLS.2024.3511670 |
format | article |
fulltext | fulltext |
identifier | ISSN: 2162-237X |
ispartof | IEEE Transactions on Neural Networks and Learning Systems, 2024-12, p.1-15 |
issn | 2162-237X |
language | eng |
recordid | cdi_ieee_primary_10792938 |
source | IEEE Electronic Library (IEL) Journals |
subjects | Artificial neural networks; Conformal Hamiltonian; Convergence; Dynamical systems; Heuristic algorithms; Kinetic energy; nonconvex stochastic optimization; Optimization; reinforcement learning (RL); Stability criteria; Stochastic processes; symplectic preservation; Thermal stability; Training; training stability |
title | Conformal Symplectic Optimization for Stable Reinforcement Learning |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-27T22%3A02%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Conformal%20Symplectic%20Optimization%20for%20Stable%20Reinforcement%20Learning&rft.jtitle=IEEE%20transaction%20on%20neural%20networks%20and%20learning%20systems&rft.au=Lyu,%20Yao&rft.date=2024-12-10&rft.spage=1&rft.epage=15&rft.pages=1-15&rft.issn=2162-237X&rft.coden=ITNNAL&rft_id=info:doi/10.1109/TNNLS.2024.3511670&rft_dat=%3Cieee%3E10792938%3C/ieee%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i650-b2923e59d960a1e74c5839ed436b7682347ca5cfecfcd6d2ca774523bb4620a63%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10792938&rfr_iscdi=true |