Deep Reinforcement Learning-Based Multi-Access in Massive Machine-Type Communication
The diverse applications of Machine-Type Communication (MTC) lead to exponential growth in Machine-to-Machine (M2M) traffic. With large-scale MTC deployment, a large number of devices are expected to access the wireless network simultaneously, resulting in network congestion. The conventional Random Access mechanism lacks the capability to handle the large number of access attempts expected from massive MTC (mMTC). Additionally, mMTC transceivers often generate short data packets in a sporadic and sometimes unpredictable manner. To address the growing need for efficient communication in massive machine-type communication scenarios, we propose an innovative solution called Deep Reinforcement Learning-Based Multi-Access (DRLMA). Our model considers the Base Station (BS) as an agent navigating the landscape of machine-type communication devices. This agent dynamically switches between grant-based and grant-free access to leverage the strengths of each. We formulate the multi-access problem as a Partially Observable Markov Decision Process (POMDP) to better understand and tackle the challenges associated with dynamic access policies. Leveraging Deep Reinforcement Learning techniques, our approach adapts to sporadic traffic patterns, crafting an access policy that maximizes both network throughput and energy efficiency under battery constraints. Simulation results show that the proposed DRLMA scheme outperforms traditional access schemes and existing access protocols under sporadic traffic in terms of energy efficiency, throughput, and network lifetime.
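The abstract describes the DRLMA idea at a high level: a base-station agent observes partial feedback from sporadic uplink traffic and learns when to use grant-based versus grant-free access. As a rough illustration only, and not the authors' implementation, the sketch below trains a small DQN-style agent on a toy mMTC environment; the device count, traffic probability, reward weights, and network sizes are all illustrative assumptions.

```python
# Minimal sketch (NOT the paper's DRLMA implementation): a DQN-style agent at
# the base station that switches between grant-based (action 0) and grant-free
# (action 1) access. The environment is a toy stand-in for sporadic mMTC
# uplink traffic; all constants and reward weights are illustrative.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

N_DEVICES, P_ACTIVE, GB_CAPACITY, GF_SLOTS = 50, 0.05, 4, 8

class ToyMmtcEnv:
    """Toy mMTC uplink: devices wake up sporadically; the BS only observes
    aggregate outcomes of the previous slot (partial observability)."""
    def reset(self):
        self.battery = np.ones(N_DEVICES)                 # normalized battery levels
        return np.zeros(3, dtype=np.float32)              # [delivered, collisions, mean battery]

    def step(self, action):
        active = np.random.rand(N_DEVICES) < P_ACTIVE     # sporadic arrivals
        n_active = int(active.sum())
        if action == 0:                                   # grant-based: scheduled, capped capacity
            delivered, collisions = min(n_active, GB_CAPACITY), 0
            energy = 2.0 * delivered                      # signaling overhead costs extra energy
        else:                                             # grant-free: random slot choice, collisions possible
            counts = np.bincount(np.random.randint(0, GF_SLOTS, size=n_active), minlength=GF_SLOTS)
            delivered, collisions = int((counts == 1).sum()), int((counts > 1).sum())
            energy = 1.0 * n_active
        self.battery = np.clip(self.battery - 0.01 * active, 0.0, 1.0)
        reward = delivered - 0.1 * energy                 # throughput vs. energy trade-off (illustrative)
        obs = np.array([delivered / GF_SLOTS, collisions / GF_SLOTS,
                        self.battery.mean()], dtype=np.float32)
        return obs, reward

# Small Q-network over the 3-dimensional aggregate observation, 2 actions.
q_net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=5000)

env = ToyMmtcEnv()
obs, eps = env.reset(), 1.0
for step in range(5000):
    eps = max(0.05, eps * 0.999)                          # epsilon-greedy exploration
    if random.random() < eps:
        action = random.randrange(2)
    else:
        with torch.no_grad():
            action = int(q_net(torch.from_numpy(obs)).argmax())
    next_obs, reward = env.step(action)
    buffer.append((obs, action, reward, next_obs))
    obs = next_obs
    if len(buffer) >= 64:                                 # one-step TD update on a replay minibatch
        s, a, r, s2 = map(np.array, zip(*random.sample(list(buffer), 64)))
        s, s2 = torch.from_numpy(s).float(), torch.from_numpy(s2).float()
        q = q_net(s).gather(1, torch.tensor(a).long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = torch.tensor(r).float() + 0.95 * q_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In the paper's POMDP formulation the observation, action space, and reward are presumably richer (e.g., explicit battery constraints and per-slot feedback); the toy environment above only stands in for that structure to show how a learned switching policy between the two access modes can be trained.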
Published in: IEEE Access, 2024, Vol. 12, pp. 178690-178704
Main Authors: Ravi, Nasim; Lourenco, Nuno; Curado, Marilia; Monteiro, Edmundo
Format: Article
Language: English
Subjects: Base stations; Batteries; deep reinforcement learning; Energy efficiency; grant-based; grant-free; Interference cancellation; Massive machine-type communication; multiple access; NOMA; Resource management; sporadic traffic; Switches; Throughput; Ultra reliable low latency communication; Uplink
Online Access: https://ieeexplore.ieee.org/document/10769430
DOI: 10.1109/ACCESS.2024.3507577
EISSN: 2169-3536
Publisher: IEEE
Source: IEEE Open Access Journals