
Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge Intelligence

Authors :
Peng Wei
Kun Guo
Ye Li
Jue Wang
Wei Feng
Shi Jin
Ning Ge
Ying-Chang Liang
Source :
IEEE Access, Vol 10, Pp 65156-65192 (2022)
Publication Year :
2022
Publisher :
IEEE, 2022.

Abstract

Mobile edge computing (MEC) is considered a novel paradigm for handling computation-intensive and delay-sensitive tasks in fifth-generation (5G) networks and beyond. However, its inherent uncertainty, arising from the dynamics and randomness of mobile devices, wireless channels, and edge networks, leads to high-dimensional, nonconvex, nonlinear, and NP-hard optimization problems. Thanks to reinforcement learning (RL), an agent trained by iteratively interacting with this dynamic and random environment can intelligently learn the optimal policy in MEC. Furthermore, evolved variants such as deep reinforcement learning (DRL) can achieve faster convergence and higher learning accuracy by parametrically approximating the large-scale state-action space. This paper provides a comprehensive review of RL-enabled MEC and offers insight for development in this area. In particular, the MEC challenges associated with free mobility, dynamic channels, and distributed services that different kinds of RL algorithms can address are identified, followed by how RL solutions apply to diverse mobile applications. Finally, open challenges are discussed to provide helpful guidance for future research on RL training and learning in MEC.
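To make the abstract's idea concrete, the following is a minimal toy sketch (not taken from the paper) of how an RL agent can learn an MEC offloading policy by interacting with a random environment. It uses tabular Q-learning for a binary decision (compute locally vs. offload); the state buckets, cost model, and all hyperparameters are illustrative assumptions, not the survey's actual formulations.

```python
import random

# Toy illustration: tabular Q-learning for a binary offloading decision.
# State = (task-size bucket, channel-quality bucket); action 0 = compute
# locally, action 1 = offload. Dynamics and costs are assumed for this sketch.

STATES = [(s, c) for s in range(3) for c in range(3)]   # 3 size x 3 channel buckets
ACTIONS = [0, 1]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1                   # learning rate, discount, exploration

def cost(state, action):
    """Assumed latency cost: local cost grows with task size;
    offloading cost falls as channel quality improves."""
    size, channel = state
    if action == 0:
        return 1.0 + size                         # local execution delay
    return 1.5 + 0.5 * size - 0.5 * channel       # transmission + remote delay

def step(state, action, rng):
    """Reward is negative cost; the next state is drawn at random,
    modeling the abstract's 'dynamic and random environment'."""
    reward = -cost(state, action)
    next_state = rng.choice(STATES)
    return reward, next_state

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = rng.choice(STATES)
    for _ in range(episodes):
        if rng.random() < EPSILON:                # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward, nxt = step(state, action, rng)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Standard Q-learning temporal-difference update
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt
    return q

q = train()
# Greedy policy per state: offload (1) when the learned Q-value favors it.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

DRL methods surveyed in the paper replace the Q-table above with a neural-network approximation, which is what makes large, continuous state-action spaces tractable.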

Details

Language :
English
ISSN :
2169-3536
Volume :
10
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.7dafade1954d403093489582f8e1e783
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2022.3183647