
Deep Reinforcement Learning-Based Computation Offloading in UAV Swarm-Enabled Edge Computing for Surveillance Applications

Bibliographic Details
Published in:IEEE access 2023-01, Vol.11, p.1-1
Main Authors: Asiful Huda, S. M., Moh, Sangman
Format: Article
Language:English
description The rapid development of the Internet of Things and wireless communication has resulted in the emergence of many latency-constrained and computation-intensive applications such as surveillance, virtual reality, and disaster monitoring. To satisfy the computational demand and reduce the prolonged transmission delay to the cloud, mobile edge computing (MEC) has evolved as a potential candidate that can improve task completion efficiency in a reliable fashion. Owing to their high mobility and ease of deployment, unmanned aerial vehicles (UAVs) are promising candidates for incorporation with MEC to support such computation-intensive and latency-critical applications. However, determining the ideal offloading decision for a UAV on the basis of the task characteristics remains a crucial challenge. In this paper, we investigate a surveillance application scenario of a hierarchical UAV swarm that includes a UAV-enabled MEC with a team of UAVs surveilling the area to be monitored. To determine the optimal offloading policy, we propose a deep reinforcement learning-based computation offloading (DRLCO) scheme using double deep Q-learning, which minimizes the weighted sum cost by jointly considering task execution delay and energy consumption. A performance study shows that the proposed DRLCO technique significantly outperforms conventional schemes in terms of offloading cost, energy consumption, and task execution delay. The better convergence and effectiveness of the proposed method over conventional schemes are also demonstrated.
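The two ingredients the abstract names — a weighted sum cost over task execution delay and energy consumption, and a double deep Q-learning update — can be sketched as follows. This is an illustrative reconstruction, not the authors' DRLCO implementation: the weight `w`, the cost units, and the Q-value vectors are assumptions for demonstration.

```python
import numpy as np

def weighted_cost(delay, energy, w=0.5):
    """Weighted sum of task execution delay and energy consumption.

    The weight w (assumed here) trades off the two objectives; the
    DRLCO scheme described in the abstract minimizes a cost of this
    general form per offloading decision.
    """
    return w * delay + (1.0 - w) * energy

def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """One-step double deep Q-learning target.

    The online network selects the greedy next action; the target
    network evaluates it. Decoupling selection from evaluation reduces
    the overestimation bias of vanilla deep Q-learning.
    """
    if done:
        return reward
    a_star = int(np.argmax(next_q_online))         # action chosen by online net
    return reward + gamma * next_q_target[a_star]  # value taken from target net

# Toy example: treat the reward as the negative weighted cost of one decision.
r = -weighted_cost(delay=0.2, energy=0.4, w=0.5)   # -> -0.3
y = double_dqn_target(r,
                      next_q_online=np.array([1.0, 2.0]),
                      next_q_target=np.array([0.5, 0.8]),
                      gamma=0.9)
```

In the toy example the online network prefers action 1, so the target network's value 0.8 (not its own maximum choice) is discounted and added to the reward, giving y = -0.3 + 0.9 * 0.8 = 0.42.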
doi_str_mv 10.1109/ACCESS.2023.3292938
identifier ISSN: 2169-3536
language eng
source IEEE Xplore Open Access Journals
subjects Aerial computing
Autonomous aerial vehicles
Cloud computing
Computation offloading
Costs
Deep learning
deep reinforcement learning
Delay
Delays
double deep Q-learning
Edge computing
Energy consumption
Internet of Things
Mobile computing
mobile edge computing
Multi-access edge computing
multi-agent reinforcement learning
Multi-agent systems
Network latency
Q-learning
Servers
Surveillance
Task analysis
unmanned aerial vehicle
Unmanned aerial vehicles
Virtual reality
Wireless communications