1. A Scalable Deep Reinforcement Learning Approach for Traffic Engineering Based on Link Control
- Authors
- Junfei Li, Zehua Guo, Jianpeng Zhang, Yuxiang Hu, Penghao Sun, and Julong Lan
- Subjects
- Job shop scheduling, Artificial neural network, Transmission delay, Computer science, Distributed computing, Networking & telecommunications, Network topology, Telecommunications network, Traffic engineering, Reinforcement learning, Dijkstra's algorithm
- Abstract
As modern communication networks grow more complex and dynamic, designing a good Traffic Engineering (TE) policy becomes difficult because the optimal traffic scheduling problem is hard to solve. Deep Reinforcement Learning (DRL) offers a way to design a model-free TE scheme through machine learning. However, existing DRL-based TE solutions cannot be applied to large networks. In this article, we propose to combine control theory and DRL to design a TE scheme. Our proposed scheme, ScaleDRL, employs the idea of pinning control theory to select a subset of links in the network, which we name critical links. Based on traffic distribution information, a DRL algorithm dynamically adjusts the weights of the critical links. Through a weighted shortest path algorithm, the forwarding paths of flows can then be dynamically adjusted. Packet-level simulations show that ScaleDRL reduces the average end-to-end transmission delay by up to 39% compared to the state of the art across different network topologies.
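The abstract's core mechanism can be sketched in a few lines: a small set of "critical" links gets new weights (in the paper, chosen by a DRL agent; here, hand-set stand-ins), and forwarding paths are recomputed with a weighted shortest path algorithm. This is a minimal illustration only, assuming a toy 4-node topology and hypothetical `dijkstra`/`apply_weights` helpers not taken from the paper:

```python
import heapq

def dijkstra(adj, src, dst):
    """Weighted shortest path over adj = {node: {neighbor: weight}}."""
    dist, prev = {src: 0.0}, {}
    pq, visited = [(0.0, src)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def apply_weights(adj, actions):
    """Install new weights on selected (undirected) links.

    In ScaleDRL these weights would come from the DRL policy;
    here they are hand-picked stand-ins.
    """
    for (u, v), w in actions.items():
        adj[u][v] = w
        adj[v][u] = w

# Toy topology: two equal-cost A->D routes, all link weights 1.
adj = {
    "A": {"B": 1.0, "C": 1.0},
    "B": {"A": 1.0, "D": 1.0},
    "C": {"A": 1.0, "D": 1.0},
    "D": {"B": 1.0, "C": 1.0},
}

print(dijkstra(adj, "A", "D"))          # initial forwarding path
apply_weights(adj, {("A", "B"): 10.0})  # raise weight on a "critical" link
print(dijkstra(adj, "A", "D"))          # traffic shifts to the A-C-D path
```

The point of the sketch is that only the critical-link weights need to change for flows to be rerouted; the shortest-path computation itself is unmodified.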
- Published
- 2021