1. A Distributed Reinforcement Learning Approach for Energy and Congestion-Aware Edge Networks
- Authors
- Guido Marchetto, Alessio Sacco, and Flavio Esposito
- Subjects
- Reinforcement learning, Computer science, auto-scaling, distributed computing, quality of service, network virtualization, networking and telecommunications, cloud computing, energy consumption, automation, network congestion, self-driving networks, orchestration (computing)
- Abstract
The long-standing pursuit of automation has also reached computer networks, which can now measure, analyze, and control themselves automatically, reacting to changes in their environment (e.g., demand) while exploiting the flexibility they already offer. Networks equipped with these capabilities are often referred to as "self-driving," with network virtualization and machine learning as the key enablers. In this context, provisioning and orchestrating physical and virtual resources is crucial both for Quality of Service guarantees and for cost management in the edge/cloud computing ecosystem. Auto-scaling mechanisms are therefore essential to manage the lifecycle of network resources effectively. In this poster, we propose Relevant, a distributed reinforcement learning approach that enables distributed automation for network orchestrators. Our solution aims to solve the congestion control problem within Software-Defined Network infrastructures while remaining mindful of energy consumption, letting resources scale up and down as traffic demands fluctuate and energy optimization opportunities arise.
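The abstract only sketches the approach at a high level; as a rough illustration of how a reinforcement-learning auto-scaler of this kind might be structured, the snippet below implements a toy tabular Q-learning loop whose state combines discretized load and energy draw and whose actions add or remove instances. Everything here (the state discretization, reward shape, hyperparameters, and the simulate_step environment) is an assumption made for illustration and is not the actual design of Relevant.

```python
# Minimal sketch of a tabular Q-learning auto-scaler, illustrating the kind of
# agent described in the abstract. All names, the state/action design, and the
# reward shape are illustrative assumptions, not taken from the paper.
import random
from collections import defaultdict

ACTIONS = (-1, 0, +1)          # scale down, hold, scale up (instances)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q_table = defaultdict(float)   # (state, action) -> estimated return

def discretize(load, energy):
    """Map continuous load and energy draw to a small discrete state."""
    return (min(int(load * 10), 9), min(int(energy * 10), 9))

def reward(load, energy):
    """Penalize congestion (high load) and energy consumption together."""
    congestion_penalty = max(0.0, load - 0.8) * 10.0
    return -(congestion_penalty + energy)

def choose_action(state):
    """Epsilon-greedy selection over the scaling actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, r, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (r + GAMMA * best_next - q_table[(state, action)])

def simulate_step(instances, demand):
    """Toy environment: load falls as instances grow, energy grows with instances."""
    load = min(1.0, demand / max(instances, 1))
    energy = 0.1 * instances
    return load, energy

if __name__ == "__main__":
    instances = 2
    for step in range(1000):
        demand = 1.0 + random.random() * 3.0          # fluctuating traffic demand
        load, energy = simulate_step(instances, demand)
        state = discretize(load, energy)
        action = choose_action(state)
        instances = max(1, instances + action)
        next_load, next_energy = simulate_step(instances, demand)
        update(state, action, reward(next_load, next_energy),
               discretize(next_load, next_energy))
```

In this sketch the reward trades congestion against energy, so the learned policy tends to hold instance counts near the smallest configuration that keeps load below the congestion threshold; a distributed deployment would run one such agent per orchestrator domain.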
- Published
- 2020