Smart collaborative optimizations strategy for mobile edge computing based on deep reinforcement learning.
- Author
- Fang, Juan; Zhang, Mengyuan; Ye, Zhiyuan; Shi, Jiamei; Wei, Jianhua
- Subjects
- DEEP learning; MOBILE computing; EDGE computing; REINFORCEMENT learning; ALGORITHMS; MACHINE learning; MOBILE learning
- Abstract
• Combining mobile devices with edge computing reduces total computing overhead and makes resource allocation reasonable.
• System performance is evaluated by the weighted sum of delay and energy consumption.
• The parameter-update method is modified to accelerate convergence of the deep reinforcement learning algorithm.
• A neural network approximates the action-value function, and a target Q-network is used to update the learning target.

With the arrival of the 5th-generation mobile network (5G) era, the data needed by mobile devices (MDs) is growing explosively. High-consumption, low-latency applications pose huge challenges for resource-constrained Internet of Things (IoT) devices. Mobile edge computing (MEC) overcomes the limited computing resources of MDs by offloading the tasks they generate to nearby MEC servers, which makes MEC increasingly important. This paper presents a task offloading strategy for a multi-device, multi-server system. To meet the task requirements of different MDs, we formulate an overhead minimization problem that jointly optimizes the delay and energy consumption of the system. We propose a Double Deep Q Network (Double-DQN) algorithm to select offloading locations for tasks generated on the mobile devices and to allocate the corresponding computing resources. Simulation results show that the algorithm allocates resources reasonably and reduces the overhead of the entire system.
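The abstract's two key ingredients can be sketched in a few lines: a weighted delay/energy overhead and the Double-DQN target, in which the online network selects the greedy next action while the target network evaluates it. This is a minimal illustration with array-valued Q "networks" and an assumed weight `w`; the paper's actual network architecture, state/action encoding, and overhead formulation are not reproduced here.

```python
import numpy as np

def system_overhead(delay, energy, w=0.5):
    """Weighted sum of delay and energy consumption; the weight w is an
    assumed trade-off parameter, not the paper's exact formulation."""
    return w * delay + (1.0 - w) * energy

def double_dqn_target(q_online, q_target, reward, next_state, gamma=0.99):
    """Double-DQN target: the online network selects the greedy next
    action, the target network evaluates it. Decoupling selection from
    evaluation reduces the overestimation bias of vanilla DQN."""
    a_star = int(np.argmax(q_online[next_state]))  # action chosen by online net
    return reward + gamma * q_target[next_state, a_star]

# Toy example: 2 states (offloading locations), 3 actions.
q_online = np.array([[1.0, 2.0, 0.5],
                     [0.2, 0.1, 0.9]])
q_target = np.array([[0.8, 1.5, 0.6],
                     [0.3, 0.2, 1.1]])

y = double_dqn_target(q_online, q_target, reward=1.0, next_state=0, gamma=0.9)
# online net picks action 1 in state 0; target net evaluates it:
# 1.0 + 0.9 * 1.5 = 2.35
```

In the paper's setting, the reward would be derived from the (negative) system overhead, so that minimizing the weighted delay/energy sum corresponds to maximizing the return the agent learns.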
- Published
- 2021