1. Omni-CNN: A Modality-Agnostic Neural Network for mmWave Beam Selection
- Author
- Salehi, Batool; Roy, Debashri; Jian, Tong; Dick, Chris; Ioannidis, Stratis; Chowdhury, Kaushik
- Abstract
Vehicles today are equipped with a variety of sensors, such as GPS, cameras, and LiDAR. We propose Omni-CNN, a machine-learning (ML) based framework that accepts one or more of these sensor inputs to speed up beam selection in vehicular networks, instead of performing an exhaustive search among all possible beams. Omni-CNN adopts a modality-agnostic approach in which a single shared model accommodates any combination of these sensor inputs, with weight-selection masks specific to each modality. In Omni-CNN, we first use the residual capacity of the shared model to learn the predictive task for the current modality. Second, we propose an adaptive algorithm that calculates the required capacity on a per-layer basis and releases the excess capacity for subsequent modalities. In our design, we use the gradients to identify the sparsity constraints that result in minimum capacity usage while maintaining accuracy. Third, given the sparsity constraints, we solve an optimization problem to select modality-specific sub-models using the Alternating Direction Method of Multipliers (ADMM) algorithm. Finally, we include decision-level aggregation to handle scenarios where more than one modality is present. Results on a challenging real-world dataset reveal that Omni-CNN reduces the overall model size by 91.4%, while achieving 80.89% accuracy in predicting the optimal beam. Furthermore, it reduces the beam selection overhead by 99.37% while retaining 93.34% of the throughput, compared to the 802.11ad standard.
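The abstract's central idea is a single shared network that serves every modality through modality-specific weight-selection masks, plus decision-level aggregation when several modalities are available. The sketch below is an illustration only, not the authors' code: the modality names, layer sizes, mask granularity, and averaging-based fusion are assumptions made for the example.

```python
# Hypothetical sketch: a shared model gated by per-modality binary masks,
# with simple decision-level fusion. Shapes and modality set are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

MODALITIES = ["gps", "camera", "lidar"]   # assumed modality set
NUM_BEAMS = 64                            # assumed beam codebook size

class MaskedSharedModel(nn.Module):
    def __init__(self, in_features=128, hidden=256):
        super().__init__()
        # Single shared backbone reused by every modality.
        self.fc1 = nn.Linear(in_features, hidden)
        self.fc2 = nn.Linear(hidden, NUM_BEAMS)
        # One fixed binary (0/1) mask per modality over the shared weights;
        # in the paper these masks would come from the ADMM-based selection.
        self.masks = nn.ParameterDict({
            m: nn.Parameter(torch.ones_like(self.fc1.weight),
                            requires_grad=False)
            for m in MODALITIES
        })

    def forward(self, x, modality):
        # Select the modality-specific sub-model by gating shared weights.
        w1 = self.fc1.weight * self.masks[modality]
        h = F.relu(F.linear(x, w1, self.fc1.bias))
        return self.fc2(h)

def fuse_decisions(model, inputs):
    # Decision-level aggregation: average per-modality beam probabilities
    # and pick the highest-scoring beam index.
    probs = [F.softmax(model(x, m), dim=-1) for m, x in inputs.items()]
    return torch.stack(probs).mean(dim=0).argmax(dim=-1)
```

Because all modalities reuse one set of shared weights and differ only in their masks, the per-modality storage cost is a binary mask rather than a full model, which is consistent with the reported reduction in overall model size.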
- Published
- 2024