Network Slice Reconfiguration by Exploiting Deep Reinforcement Learning With Large Action Space
Published in: | IEEE Transactions on Network and Service Management, 2020-12, Vol. 17 (4), p. 2197-2211 |
---|---|
Main Authors: | Wei, Fengsheng; Feng, Gang; Sun, Yao; Wang, Yatong; Qin, Shuang; Liang, Ying-Chang |
Format: | Article |
Language: | English |
container_end_page | 2211 |
container_issue | 4 |
container_start_page | 2197 |
container_title | IEEE Transactions on Network and Service Management |
container_volume | 17 |
creator | Wei, Fengsheng; Feng, Gang; Sun, Yao; Wang, Yatong; Qin, Shuang; Liang, Ying-Chang |
description | It is widely acknowledged that network slicing can tackle the diverse usage scenarios and connectivity services that the 5G-and-beyond system needs to support. To guarantee performance isolation while maximizing network resource utilization under dynamic traffic load, network slices need to be reconfigured adaptively. However, it is commonly believed that the fine-grained resource reconfiguration problem is intractable due to the extremely high computational complexity caused by numerous variables. In this article, we investigate the reconfiguration within a core network slice with the aim of minimizing long-term resource consumption by exploiting Deep Reinforcement Learning (DRL). This problem is also intractable for a conventional Deep Q-Network (DQN), as it has a multi-dimensional discrete action space that is difficult to explore efficiently. To address the curse of dimensionality, we propose to exploit the Branching Dueling Q-Network (BDQ), which incorporates the action branching architecture into DQN to drastically decrease the number of estimated actions. Based on the discrete BDQ network, we develop an intelligent network slice reconfiguration algorithm (INSRA). Extensive simulation experiments are conducted to evaluate the performance of INSRA, and the numerical results reveal that INSRA can minimize the long-term resource consumption and achieve high resource efficiency compared with several benchmark algorithms. |
doi_str_mv | 10.1109/TNSM.2020.3019248 |
format | article |
fulltext | fulltext |
identifier | ISSN: 1932-4537 |
ispartof | IEEE Transactions on Network and Service Management, 2020-12, Vol.17 (4), p.2197-2211 |
issn | 1932-4537 |
language | eng |
recordid | cdi_ieee_primary_9177109 |
source | IEEE Electronic Library (IEL) Journals |
subjects | Algorithms; branch dueling q-network; Consumption; core network slicing; Deep learning; deep reinforcement learning; Intelligent networks; Machine learning; Network slice reconfiguration; Network slicing; Numerical models; Optimization; Performance evaluation; Reconfiguration; Resource management; Resource utilization; Substrates; Uncertainty |
title | Network Slice Reconfiguration by Exploiting Deep Reinforcement Learning With Large Action Space |
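The abstract in the record above rests on one key structural idea: instead of enumerating every combination of per-dimension decisions (N^d joint actions for d decision variables with N discrete choices each), a branching dueling Q-network shares a common state representation and state value across d advantage heads, so only d·N action values have to be estimated. The sketch below is a minimal illustration of that structure, not the network used in the paper; the layer sizes, hidden width, mean-based dueling aggregation, and the example dimensions (20-dimensional state, 3 branches, 5 choices per branch) are all illustrative assumptions.

```python
# Minimal sketch of a branching dueling Q-network head (illustrative only;
# not the authors' implementation or hyperparameters).
import torch
import torch.nn as nn


class BranchingDuelingQNet(nn.Module):
    """One shared trunk, a scalar state-value head, and one advantage head
    per action dimension (branch)."""

    def __init__(self, state_dim: int, num_branches: int,
                 actions_per_branch: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Single state value V(s), shared by all branches.
        self.value = nn.Linear(hidden, 1)
        # d advantage heads of size N each: d*N outputs in total,
        # instead of the N**d outputs a flat DQN would need.
        self.advantages = nn.ModuleList(
            [nn.Linear(hidden, actions_per_branch) for _ in range(num_branches)]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v = self.value(h)                      # shape: (batch, 1)
        q_branches = []
        for adv_head in self.advantages:
            a = adv_head(h)                    # shape: (batch, N)
            # Dueling combination per branch: Q = V + (A - mean(A)).
            q_branches.append(v + a - a.mean(dim=-1, keepdim=True))
        return torch.stack(q_branches, dim=1)  # shape: (batch, d, N)


if __name__ == "__main__":
    # Hypothetical sizes: a 20-dimensional slice state, 3 reconfiguration
    # decisions (branches), 5 discrete levels per decision.
    net = BranchingDuelingQNet(state_dim=20, num_branches=3, actions_per_branch=5)
    q = net(torch.randn(4, 20))            # batch of 4 states
    greedy_action = q.argmax(dim=-1)       # one discrete choice per branch
    print(q.shape, greedy_action.shape)    # (4, 3, 5) and (4, 3)
```

Because the greedy action is taken independently per branch, selecting a joint reconfiguration action reduces from searching N^d combinations to d separate N-way choices, which is the reduction in estimated actions the abstract refers to.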