A Scalable Deep Reinforcement Learning Approach for Traffic Engineering Based on Link Control
Published in: IEEE Communications Letters, 2021-01, Vol. 25 (1), pp. 171-175
Main Authors: Sun, Penghao; Lan, Julong; Li, Junfei; Zhang, Jianpeng; Hu, Yuxiang; Guo, Zehua
Format: Article
Language: English
Creator: Sun, Penghao; Lan, Julong; Li, Junfei; Zhang, Jianpeng; Hu, Yuxiang; Guo, Zehua
Description: As modern communication networks are growing more complicated and dynamic, designing a good Traffic Engineering (TE) policy becomes difficult due to the complexity of solving the optimal traffic scheduling problem. Deep Reinforcement Learning (DRL) provides us with a chance to design a model-free TE scheme through machine learning. However, existing DRL-based TE solutions cannot be applied to large networks. In this article, we propose to combine the control theory and DRL to design a TE scheme. Our proposed scheme ScaleDRL employs the idea from the pinning control theory to select a subset of links in the network and name them critical links. Based on the traffic distribution information, we use a DRL algorithm to dynamically adjust the link weights for the critical links. Through a weighted shortest path algorithm, the forwarding paths of the flows can be dynamically adjusted. The packet-level simulation shows that ScaleDRL reduces the average end-to-end transmission delay by up to 39% compared to the state-of-the-art in different network topologies.
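The abstract describes a control loop in which a DRL agent adjusts the weights of a few critical links and a weighted shortest-path algorithm then re-derives the forwarding paths. The sketch below is not the authors' implementation: it uses plain Dijkstra over a hypothetical four-node topology (the node names and weights are illustrative) to show the core mechanism, that raising one critical link's weight is enough to shift a flow onto a different path.

```python
import heapq

def shortest_path(links, src, dst):
    """Dijkstra over an undirected graph given as {(u, v): weight}; returns the node path src..dst."""
    adj = {}
    for (u, v), w in links.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk predecessors back from dst to recover the path.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy topology: two candidate routes A->D of equal length.
links = {("A", "B"): 1.0, ("B", "D"): 1.0, ("A", "C"): 1.0, ("C", "D"): 1.0}
print(shortest_path(links, "A", "D"))  # ['A', 'B', 'D']

# Suppose the agent has marked (A, B) as a critical link and raises its weight:
links[("A", "B")] = 5.0
print(shortest_path(links, "A", "D"))  # ['A', 'C', 'D'] -- the flow is rerouted
```

Only the weight of the pinned link changes between the two calls; the routing algorithm itself stays fixed, which is what keeps the DRL action space small relative to controlling every link.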
DOI: 10.1109/LCOMM.2020.3022064
ISSN: 1089-7798
EISSN: 1558-2558
Source: IEEE Electronic Library (IEL) Journals
Subjects: Algorithms; Communication networks; Control theory; Deep learning; Deep reinforcement learning; Heuristic algorithms; Learning (artificial intelligence); Links; Machine learning; Network topologies; Neural networks; pinning control; Routing; software-defined networking; Traffic control; Traffic engineering; Traffic information