Traffic Steering for Cellular-Enabled UAVs: A Federated Deep Reinforcement Learning Approach
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: This paper investigates the fundamental traffic steering issue for cellular-enabled unmanned aerial vehicles (UAVs), where each UAV needs to select one of several Mobile Network Operators (MNOs) to steer its traffic to in order to improve its Quality-of-Service (QoS). To this end, we first formulate the issue as an optimization problem that minimizes the maximum outage probability among the UAVs. This problem is non-convex and non-linear, and is therefore generally difficult to solve. We propose a solution based on the deep reinforcement learning (DRL) framework, in which we define the environment and agent elements. Furthermore, to avoid sharing the experiences learned by each UAV in this solution, we propose a federated deep reinforcement learning (FDRL)-based solution. Specifically, each UAV serves as a distributed agent that trains a separate model, which is then communicated to a special agent (dubbed the coordinator) that aggregates all the trained models. Moreover, to optimize the aggregation process, we also introduce an FDRL with DRL-based aggregation (DRL2A) approach, in which the coordinator implements a DRL algorithm to learn the optimal aggregation parameters. We consider the deep Q-learning (DQN) algorithm for the distributed agents and Advantage Actor-Critic (A2C) for the coordinator. Simulation results are presented to validate the effectiveness of the proposed approach.
ISSN: 1938-1883
DOI: 10.1109/ICC45041.2023.10279441
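The summary above describes a federated DRL pipeline: each UAV trains a local DQN model, a coordinator aggregates the local models, and the DRL2A variant lets the coordinator learn the aggregation weights with A2C. The following is only a minimal sketch of that federated aggregation loop under stated assumptions; the local DQN update and the coordinator's A2C policy are stand-in placeholders, and all names, dimensions, and numbers are hypothetical rather than taken from the paper.

```python
# Minimal sketch of the federated aggregation loop suggested by the summary.
# The local DQN training and the coordinator's A2C weight policy are placeholders.
import numpy as np

rng = np.random.default_rng(0)

NUM_UAVS = 3    # distributed agents, one per UAV (hypothetical count)
PARAM_DIM = 8   # size of each local model's flattened parameters (toy value)
ROUNDS = 5      # federated communication rounds (toy value)


def local_dqn_update(params: np.ndarray) -> np.ndarray:
    """Placeholder for one round of local DQN training on a UAV.

    In the paper, each UAV trains on its own outage/QoS experience; here we
    just perturb the parameters to stand in for a local gradient step.
    """
    return params + 0.1 * rng.standard_normal(params.shape)


def coordinator_aggregation_weights(local_params: np.ndarray) -> np.ndarray:
    """Placeholder for the coordinator's learned aggregation weights.

    DRL2A learns these weights with Advantage Actor-Critic; this sketch simply
    returns uniform weights, i.e., plain federated averaging.
    """
    n = local_params.shape[0]
    return np.full(n, 1.0 / n)


# Initialise the global model and run the federated loop.
global_params = np.zeros(PARAM_DIM)
for rnd in range(ROUNDS):
    # Each UAV starts from the current global model and trains locally.
    local_params = np.stack(
        [local_dqn_update(global_params.copy()) for _ in range(NUM_UAVS)]
    )
    # The coordinator picks aggregation weights and forms the new global model;
    # only model parameters are exchanged, never the UAVs' raw experiences.
    w = coordinator_aggregation_weights(local_params)
    global_params = (w[:, None] * local_params).sum(axis=0)
    print(f"round {rnd}: ||global_params|| = {np.linalg.norm(global_params):.3f}")
```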