Multi-Agent Deep Reinforcement Learning for Computation Offloading and Interference Coordination in Small Cell Networks.

Authors :
Huang, Xiaoyan
Leng, Supeng
Maharjan, Sabita
Zhang, Yan
Source :
IEEE Transactions on Vehicular Technology. Sep 2021, Vol. 70, Issue 9, p9282-9293. 12p.
Publication Year :
2021

Abstract

Integrating mobile edge computing (MEC) with small cell networks has been conceived as a promising solution to provide pervasive computing services. However, the interactions among small cells due to inter-cell interference, the diverse application-specific requirements, as well as the highly dynamic wireless environment make it challenging to design an optimal computation offloading scheme. In this paper, we focus on the joint design of computation offloading and interference coordination for edge intelligence empowered small cell networks. To this end, we propose a distributed multi-agent deep reinforcement learning (DRL) scheme with the objective of minimizing the overall energy consumption while ensuring the latency requirements. Specifically, we exploit the collaboration among small cell base station (SBS) agents to adaptively adjust their strategies, considering computation offloading, channel allocation, power control, and computation resource allocation. Further, to decrease the computation complexity and signaling overhead of the training process, we design a federated DRL scheme which only requires SBS agents to share their model parameters instead of local training data. Numerical results demonstrate that our proposed schemes can significantly reduce the energy consumption and effectively guarantee the latency requirements compared with the benchmark schemes. [ABSTRACT FROM AUTHOR]
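The abstract gives no implementation details, so the following is only a minimal illustrative sketch of the federated parameter-sharing idea it describes: each SBS agent trains on its own local experience and only model parameters are exchanged and averaged, never the local training data. The network shape, local update rule, and FedAvg-style aggregation below are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only (NOT the paper's implementation): a FedAvg-style
# parameter-sharing round among SBS agents, using plain NumPy in place of a
# real DRL framework. Each agent holds local network weights, performs a
# local update on its own (private) experience, and only the weights are
# shared and averaged -- never the local training data.
import numpy as np

rng = np.random.default_rng(0)

NUM_AGENTS = 4    # number of SBS agents (assumed)
STATE_DIM = 8     # e.g. channel gains, task queue state (assumed)
ACTION_DIM = 5    # e.g. offloading / channel / power choices (assumed)


def init_weights():
    """Single-layer linear Q-approximator, purely for illustration."""
    return rng.normal(scale=0.1, size=(STATE_DIM, ACTION_DIM))


def local_update(weights, steps=10, lr=0.01):
    """Stand-in for local DRL training on the agent's own experience."""
    for _ in range(steps):
        state = rng.normal(size=STATE_DIM)     # locally observed state
        target = rng.normal(size=ACTION_DIM)   # placeholder learning target
        pred = state @ weights
        grad = np.outer(state, pred - target)  # squared-error gradient
        weights = weights - lr * grad
    return weights


def federated_average(all_weights):
    """Aggregate by averaging model parameters (FedAvg-style), not data."""
    return np.mean(np.stack(all_weights), axis=0)


# One communication round: local training, then parameter averaging,
# then broadcasting the aggregated model back to every agent.
agent_weights = [init_weights() for _ in range(NUM_AGENTS)]
agent_weights = [local_update(w) for w in agent_weights]
global_weights = federated_average(agent_weights)
agent_weights = [global_weights.copy() for _ in range(NUM_AGENTS)]
print("global weight norm after one round:", np.linalg.norm(global_weights))
```

Each communication round thus moves only a fixed-size parameter block per agent rather than the raw training samples, which is the mechanism the abstract credits with reducing signaling overhead.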

Details

Language :
English
ISSN :
00189545
Volume :
70
Issue :
9
Database :
Academic Search Index
Journal :
IEEE Transactions on Vehicular Technology
Publication Type :
Academic Journal
Accession number :
153712012
Full Text :
https://doi.org/10.1109/TVT.2021.3096928