
Using Deep Reinforcement Learning for mmWave Real-Time Scheduling

We study the problem of real-time scheduling in a multi-hop millimeter-wave (mmWave) mesh. We develop a model-free deep reinforcement learning algorithm called Adaptive Activator RL (AARL), which determines the subset of mmWave links that should be activated during each time slot and the power level for each link.


Bibliographic Details
Published in: arXiv.org, 2023-02
Main Authors: Gahtan, Barak, Cohen, Reuven, Bronstein, Alex M, Kedar, Gil
Format: Article
Language:English
Subjects:
Online Access: Get full text
container_title arXiv.org
creator Gahtan, Barak; Cohen, Reuven; Bronstein, Alex M; Kedar, Gil
description We study the problem of real-time scheduling in a multi-hop millimeter-wave (mmWave) mesh. We develop a model-free deep reinforcement learning algorithm called Adaptive Activator RL (AARL), which determines the subset of mmWave links that should be activated during each time slot and the power level for each link. The most important property of AARL is its ability to make scheduling decisions within the strict time slot constraints of typical 5G mmWave networks. AARL can handle a variety of network topologies, network loads, and interference models, and it can also adapt to different workloads. We demonstrate the operation of AARL on several topologies: a small topology with 10 links, a moderately sized mesh with 48 links, and a large topology with 96 links. For each topology, we compare the throughput obtained by AARL to that of a benchmark algorithm called RPMA (Residual Profit Maximizer Algorithm). The most important advantage of AARL over RPMA is that it is much faster: it can make the necessary scheduling decisions within every time slot, while RPMA cannot. In addition, the scheduling decisions made by AARL outperform those made by RPMA.
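The abstract above describes the core per-slot decision the paper's scheduler must make: pick a subset of mmWave links to activate, subject to interference, within a strict time slot. The toy sketch below is not the paper's AARL or RPMA; the link names, rates, and conflict model are illustrative assumptions. It brute-forces the best conflict-free link subset for one slot, which makes clear why exhaustive search (2^n subsets, infeasible at 96 links within a 5G slot) motivates a fast learned policy.

```python
# Toy illustration (NOT the paper's AARL or RPMA): the per-slot decision an
# mmWave scheduler faces -- choose which links to activate, subject to
# pairwise interference conflicts, to maximize throughput in that slot.
from itertools import combinations

def best_schedule(rates, conflicts):
    """Brute-force the best conflict-free link subset for one time slot.

    rates     : dict mapping link -> achievable rate if the link is activated
    conflicts : set of frozenset({a, b}) pairs of links that interfere
    """
    links = list(rates)
    best, best_rate = frozenset(), 0.0
    # Enumerate every non-empty subset of links (exponential in link count,
    # which is exactly why this cannot run inside a real 5G time slot).
    for r in range(1, len(links) + 1):
        for subset in combinations(links, r):
            if any(frozenset(p) in conflicts for p in combinations(subset, 2)):
                continue  # contains an interfering pair: infeasible schedule
            total = sum(rates[link] for link in subset)
            if total > best_rate:
                best, best_rate = frozenset(subset), total
    return best, best_rate

# Hypothetical 3-link example: A and B interfere, so they cannot co-activate.
rates = {"A": 3.0, "B": 2.0, "C": 2.5}
conflicts = {frozenset({"A", "B"})}
sched, rate = best_schedule(rates, conflicts)  # activates A and C
```

An RL scheduler such as the one the paper describes would instead learn a policy that emits a good activation set in a single fast forward pass rather than searching all subsets.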
format article
fullrecord ProQuest record 2721477542 (ProQuest - Publicly Available Content Database); published 2023-02-18, Cornell University Library, arXiv.org, Ithaca
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2023-02
issn 2331-8422
language eng
recordid cdi_proquest_journals_2721477542
source ProQuest - Publicly Available Content Database
subjects 5G mobile communication
Adaptive algorithms
Algorithms
Deep learning
Finite element method
Greedy algorithms
Links
Machine learning
Millimeter waves
Network topologies
Real time
Scheduling
title Using Deep Reinforcement Learning for mmWave Real-Time Scheduling
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T05%3A59%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Using%20Deep%20Reinforcement%20Learning%20for%20mmWave%20Real-Time%20Scheduling&rft.jtitle=arXiv.org&rft.au=Gahtan,%20Barak&rft.date=2023-02-18&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2721477542%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_27214775423%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2721477542&rft_id=info:pmid/&rfr_iscdi=true