Selective learning for sensing using shift-invariant spectrally stable undersampled networks
- Authors
- Verma, Ankur; Goyal, Ayush; Sarma, Sanjay; Kumara, Soundar
- Subjects
- Remote submersibles; Sampling theorem; Artificial intelligence; Data augmentation; Scientific computing
- Abstract
The amount of data collected for sensing tasks in scientific computing is based on the Shannon-Nyquist sampling theorem proposed in the 1940s. Sensor data generation will surpass 73 trillion GB by 2025 as we increase the high-fidelity digitization of the physical world. The costs of data infrastructure, and the time required to maintain and compute on all this data, are skyrocketing. To address this, we introduce a selective learning approach, in which the amount of data collected is problem dependent. We develop novel shift-invariant and spectrally stable neural networks to solve real-time sensing problems formulated as classification or regression problems. We demonstrate that (i) less data can be collected while preserving information, and (ii) unlike information-theoretic approaches, test accuracy improves with data augmentation (i.e., the size of the training data) rather than by collecting more than a certain fraction of the raw data. Although sampling occurs at Nyquist rates, not every data point has to be resolved at Nyquist resolution, and the network learns the amount of data to be collected. This has significant implications (orders-of-magnitude reductions) for the amount of data collected, computation, power, time, bandwidth, and latency required by embedded applications ranging from the low-earth-orbit economy to unmanned underwater vehicles. [ABSTRACT FROM AUTHOR]
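The paper's architecture is not reproduced in this record, but the shift-invariance property the abstract emphasizes can be illustrated with a standard signal-processing fact: the magnitude spectrum of a signal is unchanged by circular time shifts, since a shift only multiplies each Fourier coefficient by a unit-modulus phase factor. The sketch below is illustrative only and is not the authors' network; the function name `shift_invariant_features` is a hypothetical stand-in.

```python
import numpy as np

def shift_invariant_features(x):
    # Hypothetical feature map (not the paper's network): the magnitude
    # spectrum |FFT(x)| is invariant to circular shifts of x, because a
    # circular shift only changes the phase of each Fourier coefficient.
    return np.abs(np.fft.rfft(x))

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # a raw sensor trace (synthetic)
x_shifted = np.roll(x, 17)            # same trace, circularly shifted in time

# Features agree for the original and shifted signals
print(np.allclose(shift_invariant_features(x),
                  shift_invariant_features(x_shifted)))  # True
```

A network built on such spectrally stable features sees the same representation regardless of where an event lands in the sampling window, which is one reason fewer resolved samples can suffice for classification or regression.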
- Published
- 2024