
A Scalable Deep Reinforcement Learning Approach for Traffic Engineering Based on Link Control

Bibliographic Details
Published in: IEEE Communications Letters, 2021-01, Vol. 25 (1), p. 171-175
Main Authors: Sun, Penghao, Lan, Julong, Li, Junfei, Zhang, Jianpeng, Hu, Yuxiang, Guo, Zehua
Format: Article
Language:English
Description
Summary: As modern communication networks grow more complex and dynamic, designing a good Traffic Engineering (TE) policy becomes difficult due to the complexity of solving the optimal traffic scheduling problem. Deep Reinforcement Learning (DRL) offers a way to design a model-free TE scheme through machine learning. However, existing DRL-based TE solutions cannot be applied to large networks. In this article, we propose to combine control theory and DRL to design a TE scheme. Our proposed scheme, ScaleDRL, employs the idea of pinning control theory to select a subset of links in the network, which we name critical links. Based on the traffic distribution information, a DRL algorithm dynamically adjusts the link weights of the critical links. The forwarding paths of the flows are then dynamically adjusted through a weighted shortest-path algorithm. Packet-level simulations show that ScaleDRL reduces the average end-to-end transmission delay by up to 39% compared to the state of the art in different network topologies.
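
The summary describes a three-step control loop: selecting a small set of critical links in the style of pinning control, letting a DRL agent adjust only those links' weights, and recomputing flows' forwarding paths with a weighted shortest-path algorithm. The Python sketch below only illustrates that loop and is not the paper's implementation: the degree-based link selection and the random weights are placeholders for pinning control and for the DRL policy, and networkx's shortest_path stands in for the forwarding computation.

import random
import networkx as nx

random.seed(0)  # deterministic placeholder weights for reproducibility

def select_critical_links(graph, k):
    # Stand-in for pinning-control selection: rank links by the combined
    # degree of their endpoints and keep the top k as "critical" links.
    ranked = sorted(graph.edges(),
                    key=lambda e: graph.degree(e[0]) + graph.degree(e[1]),
                    reverse=True)
    return ranked[:k]

def apply_agent_weights(graph, critical_links):
    # Stand-in for the DRL policy: assign a new weight to each critical link.
    # A real agent would map the observed traffic distribution to these weights.
    for u, v in critical_links:
        graph[u][v]["weight"] = random.uniform(1.0, 10.0)

def forwarding_path(graph, src, dst):
    # Flows follow weighted shortest paths, so changing critical-link weights
    # steers traffic without touching the weights of the remaining links.
    return nx.shortest_path(graph, src, dst, weight="weight")

if __name__ == "__main__":
    g = nx.petersen_graph()                   # small fixed topology for the demo
    nx.set_edge_attributes(g, 1.0, "weight")  # all links start with unit weight
    critical = select_critical_links(g, k=5)  # subset of links the agent may control
    apply_agent_weights(g, critical)          # agent step: adjust critical-link weights only
    print(forwarding_path(g, 0, 7))           # forwarding path reflects the new weights
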
ISSN: 1089-7798, 1558-2558
DOI: 10.1109/LCOMM.2020.3022064