
Online Sparse Beamforming in C-RAN: A Deep Reinforcement Learning Approach

Authors:
Chong-Hao Zhong
Kun Guo
Mingxiong Zhao
Source:
WCNC
Publication Year:
2021
Publisher:
IEEE, 2021.

Abstract

As the cloud radio access network (C-RAN) becomes a significant component of 5G wireless communication, higher communication rates are required; yet the problem of using sparse beamforming to maximize the long-term achievable sum rate in C-RAN subject to transmit power constraints remains open. Inspired by the success of deep reinforcement learning (DRL) in solving dynamic programming problems, we propose a DRL-based framework for online sparse beamforming in C-RAN. In particular, in each decision period the DRL agent activates remote radio heads (RRHs) based on the defined state space, action space, and reward function, and simultaneously decides the transmit beamforming at the active RRHs. Through simulations, we evaluate the proposed framework against traditional schemes and show that it achieves a higher sum rate in time-varying network environments.
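
The abstract describes a DRL agent that jointly handles RRH activation and transmit beamforming in each decision period. As a rough illustration only (not the authors' implementation), the Python sketch below pairs a small Q-network for RRH on/off decisions with maximum-ratio transmission (MRT) at the active RRHs as a stand-in for the paper's beamforming step; the network sizes, Rayleigh channel model, reward definition, and all hyperparameters are assumptions made for this example.

```python
# Minimal illustrative sketch, not the paper's method: a Q-network picks which
# RRHs to activate each period; beamforming at active RRHs is approximated by
# MRT. Sizes, channel model, and hyperparameters are assumptions.
import itertools
import numpy as np
import torch
import torch.nn as nn

N_RRH, N_UE, P_MAX, NOISE = 4, 2, 1.0, 0.1

# Action space: every on/off pattern of the RRHs, excluding "all off".
ACTIONS = [a for a in itertools.product([0, 1], repeat=N_RRH) if any(a)]

def sum_rate(h, active):
    """Achievable sum rate with per-UE MRT beamforming over the active RRHs."""
    mask = np.array(active, dtype=float)
    rate = 0.0
    for k in range(N_UE):
        hk = h[:, k] * mask                       # channel seen through active RRHs
        w_k = np.sqrt(P_MAX / N_UE) * hk / (np.linalg.norm(hk) + 1e-9)
        signal = abs(np.vdot(h[:, k], w_k)) ** 2
        interf = 0.0
        for j in range(N_UE):
            if j == k:
                continue
            hj = h[:, j] * mask
            w_j = np.sqrt(P_MAX / N_UE) * hj / (np.linalg.norm(hj) + 1e-9)
            interf += abs(np.vdot(h[:, k], w_j)) ** 2
        rate += np.log2(1.0 + signal / (interf + NOISE))
    return rate

def new_channel():
    """Time-varying Rayleigh fading: complex Gaussian gain per RRH-UE pair."""
    return (np.random.randn(N_RRH, N_UE) + 1j * np.random.randn(N_RRH, N_UE)) / np.sqrt(2)

# State: real and imaginary parts of the current channel matrix, flattened.
state_dim = 2 * N_RRH * N_UE
qnet = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, len(ACTIONS)))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def to_state(h):
    return torch.tensor(np.concatenate([h.real.ravel(), h.imag.ravel()]), dtype=torch.float32)

eps = 0.2  # epsilon-greedy exploration rate
for step in range(2000):
    h = new_channel()
    q = qnet(to_state(h))
    a = np.random.randint(len(ACTIONS)) if np.random.rand() < eps else int(q.argmax())
    r = sum_rate(h, ACTIONS[a])                   # reward = achievable sum rate
    # One-step (bandit-style) update: regress Q(s, a) toward the observed reward.
    loss = (q[a] - torch.tensor(r, dtype=torch.float32)) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
```

In this toy setup the activation decision is the only learned quantity; a closer analogue of the paper would also learn the beamforming vectors and enforce per-RRH power constraints, which are simplified away here.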

Details

Database:
OpenAIRE
Journal:
2021 IEEE Wireless Communications and Networking Conference (WCNC)
Accession number:
edsair.doi...........05026656df1101290e3c5d017f0607af