Link Scheduling in Satellite Networks via Machine Learning Over Riemannian Manifolds
- Authors
Joarder Jafor Sadique, Imtiaz Nasim, and Ahmed S. Ibrahim
- Subjects
Convolutional neural network, LEO satellite, link scheduling, recurrent neural network, Riemannian geometry, symmetric positive definite matrices, Telecommunication (TK5101-6720), Transportation and communications (HE1-9990)
- Abstract
Low Earth Orbit (LEO) satellites play a crucial role in enhancing global connectivity, serving as a complementary solution to existing terrestrial systems. In wireless networks, scheduling is a vital process that allocates time-frequency resources to users for interference management. However, LEO satellite networks face significant challenges in scheduling their links towards ground users due to the satellites’ mobility and overlapping coverage. This paper addresses the dynamic link scheduling problem in LEO satellite networks by considering the spatio-temporal correlations introduced by the satellites’ movements. The first step in the proposed solution is to model the network over Riemannian manifolds, by representing the network state as symmetric positive definite (SPD) matrices. We introduce two machine learning (ML)-based link scheduling techniques that model the dynamic evolution of satellite positions and link conditions over time and space. To accurately predict satellite link states, we present a recurrent neural network (RNN) over Riemannian manifolds, which captures temporal correlations in the link states. Furthermore, we introduce a separate model, a convolutional neural network (CNN) over Riemannian manifolds, which captures geometric relationships between satellites and users by extracting spatial features from the network topology across all links. Simulation results demonstrate that both the RNN and the CNN over Riemannian manifolds deliver performance comparable to the fractional programming-based link scheduling (FPLinQ) benchmark. Remarkably, unlike other ML-based models that require extensive training data, both models need only 30 training samples to achieve over 99% of the benchmark’s sum rate, while maintaining similar computational complexity.
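The abstract's key modeling step is to represent network state as symmetric positive definite (SPD) matrices, which form a Riemannian manifold. As a minimal illustrative sketch (not the paper's actual pipeline), one common construction builds an SPD matrix as a regularized Gram matrix of per-link features, and compares two such states with the affine-invariant Riemannian metric d(A, B) = ||log(A^(-1/2) B A^(-1/2))||_F. The feature encoding `spd_from_gains` and its inputs here are hypothetical placeholders:

```python
import numpy as np

def spd_from_gains(G, eps=1e-6):
    """Map a (hypothetical) matrix of link-gain features to an SPD matrix.

    G G^T is positive semidefinite; adding eps * I makes it strictly
    positive definite, i.e. a valid point on the SPD manifold.
    """
    return G @ G.T + eps * np.eye(G.shape[0])

def _sym_fun(A, f):
    """Apply a scalar function f to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.T

def affine_invariant_distance(A, B):
    """Geodesic distance under the affine-invariant Riemannian metric (AIRM)."""
    A_inv_sqrt = _sym_fun(A, lambda w: w ** -0.5)   # A^(-1/2)
    M = A_inv_sqrt @ B @ A_inv_sqrt                 # A^(-1/2) B A^(-1/2)
    M = (M + M.T) / 2                               # symmetrize against round-off
    return np.linalg.norm(_sym_fun(M, np.log), "fro")
```

A defining property of this metric, and one reason SPD-manifold models are attractive for such data, is congruence invariance: d(X A X^T, X B X^T) = d(A, B) for any invertible X, so the distance is unchanged under invertible linear re-encodings of the features.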
- Published
- 2025