
A novel Q-learning-based routing scheme using an intelligent filtering algorithm for flying ad hoc networks (FANETs)

Bibliographic Details
Published in: Journal of King Saud University – Computer and Information Sciences, 2023-12, Vol. 35 (10), p. 101817, Article 101817
Main Authors: Hosseinzadeh, Mehdi, Ali, Saqib, Ionescu-Feleaga, Liliana, Ionescu, Bogdan-Stefan, Yousefpoor, Mohammad Sadegh, Yousefpoor, Efat, Ahmed, Omed Hassan, Rahmani, Amir Masoud, Mehmood, Asif
Format: Article
Language: English
Description
Summary: The flying ad hoc network (FANET) is an emerging network of unmanned aerial vehicles (UAVs) that has attracted the attention of researchers around the world. Because the UAVs in such a network must cooperate, data transfer between them is essential, and routing protocols must determine how each UAV builds routing paths to the others in a wireless ad hoc network to facilitate this data transmission. Reinforcement learning (RL), especially Q-learning, is an effective way to address existing challenges in routing approaches and to make them autonomous, self-adaptive, and self-learning. In this paper, Q-learning is used to improve network performance, and a Q-learning-based routing method with an intelligent filtering algorithm, called QRF, is presented for FANETs. The main innovation is that QRF manages the size of the state space using the proposed filtering algorithm, which increases the convergence rate of the Q-learning-based routing algorithm. In addition, QRF tunes the Q-learning parameters so that the scheme adapts better to the FANET environment. Finally, network simulator version 2 (NS2) is employed to simulate QRF. Five evaluation criteria, namely energy consumption, packet delivery rate, overhead, end-to-end delay, and network longevity, are measured, and the results of QRF are compared with those of QFAN, QTAR, and QGeo. The simulation results show that QRF balances the energy distribution among UAVs and thus extends network longevity. Moreover, the intelligent filtering algorithm designed in QRF reduces routing delay, at the cost of additional communication overhead.
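The general idea the abstract describes, filtering the candidate next-hop set before a Q-learning-based greedy choice so that the state/action space stays small and convergence speeds up, can be sketched as follows. This is an illustrative sketch only: the class name `QRouter`, the filtering criteria (residual energy and progress toward the destination), the thresholds, and the reward shape are assumptions for the example, not the QRF specification from the paper.

```python
import random

class QRouter:
    """Hypothetical Q-learning next-hop selector with a filtering step
    that prunes candidate neighbors before the epsilon-greedy choice."""

    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}             # Q[(node, neighbor)] -> estimated value
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def filter_neighbors(self, neighbors, energy, dist, min_energy=0.2):
        """Assumed filter: keep neighbors with enough residual energy
        that also move the packet closer to the destination."""
        kept = [n for n in neighbors
                if energy[n] >= min_energy and dist[n] < dist['self']]
        return kept or neighbors  # fall back if the filter empties the set

    def choose(self, node, candidates):
        """Epsilon-greedy selection over the (already filtered) candidates."""
        if random.random() < self.epsilon:
            return random.choice(candidates)
        return max(candidates, key=lambda n: self.q.get((node, n), 0.0))

    def update(self, node, nxt, reward, next_candidates):
        """Standard Q-learning update toward reward + gamma * max future Q."""
        best_next = max((self.q.get((nxt, n), 0.0) for n in next_candidates),
                        default=0.0)
        old = self.q.get((node, nxt), 0.0)
        self.q[(node, nxt)] = old + self.alpha * (reward
                                                  + self.gamma * best_next - old)

# Example: only neighbor 'A' survives the filter (B lacks energy,
# C is farther from the destination than the current node).
router = QRouter(epsilon=0.0)
energy = {'A': 0.9, 'B': 0.1, 'C': 0.8}
dist = {'self': 5.0, 'A': 3.0, 'B': 2.0, 'C': 6.0}
kept = router.filter_neighbors(['A', 'B', 'C'], energy, dist)
print(kept)  # -> ['A']
```

Because the greedy maximization in `choose` only ranges over the filtered set, the learner explores fewer state-action pairs per hop, which is the mechanism by which a filtering step of this kind can raise the convergence rate.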
ISSN: 1319-1578, 2213-1248
DOI: 10.1016/j.jksuci.2023.101817