
An Enhanced Tree Routing Based on Reinforcement Learning in Wireless Sensor Networks

In wireless sensor networks, tree-based routing can achieve a low control overhead and high responsiveness by eliminating the path search and avoiding the use of extensive broadcast messages. However, existing approaches face difficulty in finding an optimal parent node, owing to conflicting performance metrics such as reliability, latency, and energy efficiency. To strike a balance between these multiple objectives, in this paper, we revisit a classic problem of finding an optimal parent node in a tree topology. Our key idea is to find the best parent node by utilizing empirical data about the network obtained through Q-learning. Specifically, we define a state space, action set, and reward function using multiple cognitive metrics, and then find the best parent node through trial and error. Simulation results demonstrate that the proposed solution can achieve better performance regarding end-to-end delay, packet delivery ratio, and energy consumption compared with existing approaches.
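The description outlines the method only at a high level: a state space, an action set, and a reward function over multiple metrics, with the best parent found by trial and error. As a rough illustration of that idea (none of the names, weights, or link metrics below come from the paper; they are assumptions for the sketch), a tabular Q-learning parent selector might look like:

```python
import random

# Hedged sketch, not the paper's implementation: tabular Q-learning for
# parent selection. Candidate parent nodes are the actions; the reward
# blends end-to-end delay, packet delivery ratio (pdr), and residual
# energy, echoing the metrics named in the abstract.

def reward(delay, pdr, energy, w_d=0.3, w_p=0.4, w_e=0.3):
    # Lower delay is better, so it contributes negatively; pdr and
    # energy are normalized to [0, 1] and contribute positively.
    # The weights w_* are assumed, not taken from the paper.
    return -w_d * delay + w_p * pdr + w_e * energy

def choose_parent(q, candidates, epsilon=0.1):
    # Epsilon-greedy trial and error: explore a random candidate
    # occasionally, otherwise pick the highest learned Q-value.
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=lambda p: q.get(p, 0.0))

def update(q, parent, r, alpha=0.5, gamma=0.9, next_best=0.0):
    # One-step Q-learning: Q <- Q + alpha * (r + gamma * max_next - Q).
    old = q.get(parent, 0.0)
    q[parent] = old + alpha * (r + gamma * next_best - old)

# Toy topology: parent "B" offers lower delay, a higher delivery ratio,
# and more residual energy, so its Q-value should come out on top.
metrics = {"A": (0.8, 0.6, 0.5), "B": (0.2, 0.95, 0.7)}
q = {}
for _ in range(50):
    for p in ("A", "B"):  # deterministic sweep over both actions
        update(q, p, reward(*metrics[p]))
print(choose_parent(q, ["A", "B"], epsilon=0.0))  # prints "B"
```

In a real deployment each node would measure these quantities from its own traffic rather than from a fixed table, and exploration (the epsilon branch) is what lets a node discover a better parent when link conditions change.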


Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2022-12, Vol.23 (1), p.223
Main Authors: Kim, Beom-Su, Suh, Beomkyu, Seo, In Jin, Lee, Han Byul, Gong, Ji Seon, Kim, Ki-Il
Format: Article
Language: English
Publisher: MDPI AG
DOI: 10.3390/s23010223
ISSN: 1424-8220
PMID: 36616821
Subjects: multiple objectives; Q-learning; reinforcement learning; tree-based routing; wireless sensor networks (WSNs)