Energy Saving in 6G O-RAN Using DQN-based xApp
- Publication Year:
- 2024
Abstract
- Open Radio Access Network (Open RAN) is a transformative paradigm that supports openness, interoperability, and intelligence, with the O-RAN architecture being the most widely recognized framework in academia and industry. In the context of Open RAN, Energy Saving (ES) has become increasingly important, especially given the current trend toward network densification in the sixth generation of mobile networks (6G). Traditional energy-saving methods in the RAN struggle with the network's growing dynamics. This paper proposes using Reinforcement Learning (RL), a subset of Machine Learning (ML), to improve ES. We present a novel deep RL method for ES in 6G O-RAN, implemented as an xApp (ES-xApp). We developed two Deep Q-Network (DQN)-based ES-xApps: ES-xApp-1 uses Received Signal Strength (RSS) measurements and User Equipment (UE) geolocations, while ES-xApp-2 uses only RSS. The proposed models significantly outperformed heuristic and baseline xApps, especially with more than 20 UEs; with 50 UEs, 50% of Radio Cards (RCs) were switched off, compared to 17% with the heuristic algorithm. We observed that more informative inputs may lead to more stable training and results. This paper highlights the necessity of energy conservation in wireless networks and offers practical strategies and evidence for future research and industry practice.
- Comment: Accepted at CAMAD 2024
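The two ES-xApps described in the abstract differ only in their state inputs, so the core training loop is a standard DQN. Below is a minimal, hypothetical sketch of an ES-xApp-2-style agent in PyTorch: the state is a vector of per-UE RSS readings, each action toggles one Radio Card, and the reward is assumed to trade energy savings against coverage. All names, dimensions, and the reward shaping (QNetwork, select_action, NUM_RCS, etc.) are illustrative assumptions, not the paper's implementation.

import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

NUM_RCS = 8          # hypothetical number of Radio Cards under xApp control
NUM_UES = 50         # hypothetical number of User Equipments
STATE_DIM = NUM_UES  # ES-xApp-2 style: one RSS reading per UE
                     # (an ES-xApp-1 variant would append UE geolocations)

class QNetwork(nn.Module):
    """Maps an RSS state vector to one Q-value per RC-toggle action."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

policy_net = QNetwork(STATE_DIM, NUM_RCS)
target_net = QNetwork(STATE_DIM, NUM_RCS)
target_net.load_state_dict(policy_net.state_dict())
optimizer = optim.Adam(policy_net.parameters(), lr=1e-3)
replay: deque = deque(maxlen=10_000)
GAMMA, EPSILON = 0.99, 0.1

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy choice of which RC to toggle."""
    if random.random() < EPSILON:
        return random.randrange(NUM_RCS)
    with torch.no_grad():
        return int(policy_net(state).argmax().item())

def train_step(batch_size: int = 64) -> None:
    """One DQN update from uniformly sampled replay transitions."""
    if len(replay) < batch_size:
        return
    states, actions, rewards, next_states = zip(*random.sample(replay, batch_size))
    states = torch.stack(states)
    actions = torch.tensor(actions)
    rewards = torch.tensor(rewards)
    next_states = torch.stack(next_states)
    q = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_states).max(1).values  # max_a' Q_target(s', a')
    loss = nn.functional.mse_loss(q, rewards + GAMMA * q_next)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# One illustrative interaction step; random tensors stand in for RAN telemetry.
state = torch.rand(STATE_DIM)            # normalized per-UE RSS readings
action = select_action(state)
reward = float(torch.rand(()))           # assumed: energy saved minus coverage penalty
next_state = torch.rand(STATE_DIM)
replay.append((state, action, reward, next_state))
train_step()
# (Periodically: target_net.load_state_dict(policy_net.state_dict()))

In an actual xApp, the random placeholders would be replaced by RSS reports collected over the O-RAN E2 interface, and the selected action would be issued back to the RAN as an RC on/off control decision.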
- Subjects:
- Electrical Engineering and Systems Science - Signal Processing
Details
- Database:
- arXiv
- Publication Type:
- Report
- Accession number:
- edsarx.2409.15098
- Document Type:
- Working Paper