RLR: Joint Reinforcement Learning and Attraction Reward for Mobile Charger in Wireless Rechargeable Sensor Networks

Bibliographic Details
Published in: IEEE Internet of Things Journal, 2023-09, Vol. 10 (18), p. 1-1
Main Authors: Shang, Cuijuan, Chang, Chih-Yung, Liao, Wen-Hwa, Roy, Diptendu Sinha
Format: Article
Language: English
Description
Summary: Advances in wireless charging technology offer great new opportunities for extending the lifetime of a wireless sensor network (WSN), an important infrastructure of the IoT. However, existing greedy algorithms do not learn from past energy-dissipation trends. Unlike existing studies, this paper proposes a reinforcement learning approach, called RLR, that enables a mobile charger to learn the trends of a WSN, including the energy consumption of the sensors, the recharging cost, and the coverage benefit, aiming to maximize the coverage contribution of the recharged WSN. The proposed RLR consists of three modules: Sensor Energy Management, Charger Location Update, and Charger Reinforcement Learning. In the Sensor Energy Management module, each sensor manages its own energy and calculates its recharging-request threshold in a distributed manner. The Charger Location Update module adopts a quorum system to ensure effective communication between the sensors and the mobile charger. Meanwhile, the Charger Reinforcement Learning module employs attraction rewards to reflect the coverage benefit, along with penalties for the waiting time incurred by charger movement and by recharging other sensors. As a result, the charger accumulates learning experience in its Q-table, enabling it to choose the appropriate charging or moving action in each state. Performance results show that the proposed RLR outperforms existing recharging mechanisms in terms of the charging waiting time of sensors, the energy-usage efficiency of the mobile charger, and the coverage contribution of the given sensor network.
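
The summary describes a tabular Q-learning loop in which the charger's reward combines a coverage-based attraction term with penalties for waiting time and movement. Below is a minimal sketch of such a loop; the state encoding, the reward weights, and all function names here are illustrative assumptions, not the formulation published in the paper.

    import random
    from collections import defaultdict

    # Minimal tabular Q-learning sketch of the charger's decision loop as
    # outlined in the abstract. The state encoding, reward terms, and their
    # weights are illustrative assumptions, not the authors' formulation.

    ALPHA = 0.1    # learning rate
    GAMMA = 0.9    # discount factor
    EPSILON = 0.1  # exploration rate
    ACTIONS = ["charge", "move"]

    q_table = defaultdict(float)  # maps (state, action) -> learned value

    def attraction_reward(coverage_gain, waiting_time, travel_cost):
        """Reward rises with the coverage benefit of recharging a sensor and
        falls with the waiting time imposed on other requesting sensors and
        with the charger's movement cost (weights are hypothetical)."""
        return 1.0 * coverage_gain - 0.5 * waiting_time - 0.2 * travel_cost

    def choose_action(state):
        """Epsilon-greedy selection over the charger's two actions."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_table[(state, a)])

    def update(state, action, reward, next_state):
        """Standard one-step Q-learning update of the charger's Q-table."""
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (
            reward + GAMMA * best_next - q_table[(state, action)]
        )

In this sketch, epsilon-greedy selection trades off exploring new charger actions against exploiting the accumulated Q-table, mirroring how the abstract describes the charger building up learning experience before acting.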
ISSN: 2327-4662
DOI: 10.1109/JIOT.2023.3267242