A Reinforcement Learning Based Data Caching in Wireless Networks.
- Author
- Sheraz, Muhammad; Shafique, Shahryar; Imran, Sohail; Asif, Muhammad; Ullah, Rizwan; Ibrar, Muhammad; Khan, Jahanzeb; Wuttisittikulkij, Lunchakorn
- Subjects
- REINFORCEMENT learning; CACHE memory; DATA libraries; ARTIFICIAL intelligence; LOCATION data; TRAFFIC congestion
- Abstract
- Data caching has emerged as a promising technique to handle the growing data traffic and backhaul congestion of wireless networks. However, a key concern is how and where to place contents so as to optimize data access by users. Data caching can be brought close to users by deploying cache entities at Small Base Stations (SBSs). In this approach, SBSs fetch contents through the core network during off-peak traffic hours and then serve the cached contents to content-demanding users with low latency during peak traffic hours. In this paper, we exploit the potential of data caching at the SBS level to minimize data access delay. We propose an intelligent data caching mechanism based on Reinforcement Learning (RL), an artificial intelligence approach. Our proposed RL-based data caching mechanism adapts through dynamic learning and tracks network states to capture users' diverse and time-varying data demands. The proposed approach optimizes data caching at the SBS level by observing users' data demands and locations so as to efficiently utilize the limited cache resources of the SBS. Extensive simulations are performed to evaluate the performance of the proposed caching mechanism with respect to various factors such as caching capacity and data library size. The obtained results demonstrate that our proposed caching mechanism achieves a 4% performance gain in terms of delay vs. contents, a 3.5% gain in terms of delay vs. users, a 2.6% gain in terms of delay vs. cache capacity, an 18% gain in terms of percentage traffic offloading vs. popularity skewness (γ), and a 6% gain in terms of backhaul saving vs. cache capacity. [ABSTRACT FROM AUTHOR]
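- Note
- The abstract describes the mechanism only at a high level, so the following is a minimal, self-contained sketch of what learning-based caching at an SBS can look like. It is an illustrative assumption, not the authors' algorithm: a bandit-style running value estimate stands in for their full RL formulation, and the library size, cache capacity, Zipf skewness γ, learning rate, and reward shape are all invented for the example.

  # Sketch: learning-based content caching at a Small Base Station (SBS).
  # A bandit-style value estimate stands in for the paper's full RL scheme;
  # all constants below are illustrative assumptions, not the authors' design.
  import random

  import numpy as np

  rng = np.random.default_rng(seed=0)

  N_CONTENTS = 50    # data library size (assumed)
  CACHE_SIZE = 5     # SBS cache capacity (assumed)
  GAMMA_SKEW = 0.8   # Zipf popularity skewness, the gamma in the abstract
  ALPHA = 0.05       # learning rate (assumed)
  EPSILON = 0.1      # exploration rate for eviction decisions (assumed)

  # Users' requests follow a Zipf law: rank-r content has weight r^(-gamma).
  ranks = np.arange(1, N_CONTENTS + 1)
  popularity = ranks.astype(float) ** -GAMMA_SKEW
  popularity /= popularity.sum()

  # Q[c]: learned estimate of how valuable it is to keep content c cached.
  Q = np.zeros(N_CONTENTS)
  cache: set[int] = set()
  hits = 0

  N_REQUESTS = 100_000
  for _ in range(N_REQUESTS):
      c = int(rng.choice(N_CONTENTS, p=popularity))  # incoming user request

      # Value update: nudge all estimates toward the observed request, so Q
      # tracks an exponentially weighted request frequency per content.
      target = np.zeros(N_CONTENTS)
      target[c] = 1.0
      Q += ALPHA * (target - Q)

      if c in cache:
          hits += 1      # served locally: low delay, no backhaul use
          continue

      # Cache miss: content is fetched over the backhaul; decide caching.
      if len(cache) < CACHE_SIZE:
          cache.add(c)
      else:
          # Epsilon-greedy eviction: usually pick the lowest-value cached
          # content as the candidate victim, occasionally a random one.
          if random.random() < EPSILON:
              victim = random.choice(sorted(cache))
          else:
              victim = min(cache, key=lambda x: Q[x])
          # Admit c only if it currently looks more valuable than the victim.
          if Q[c] > Q[victim]:
              cache.remove(victim)
              cache.add(c)

  print(f"cache hit ratio after {N_REQUESTS} requests: {hits / N_REQUESTS:.3f}")
  print(f"cached contents (by index): {sorted(cache)}")

- Under a Zipf demand with γ = 0.8, the learned cache converges toward the few most popular contents, which is the effect the abstract's traffic-offloading and backhaul-saving results rely on; the paper's actual method additionally tracks network states and user locations.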
- Published
- 2022