1. Towards Scalable and Stable Parallelization of Nonlinear RNNs
- Authors
Gonzalez, Xavier; Warrington, Andrew; Smith, Jimmy T. H.; and Linderman, Scott W.
- Subjects
Computer Science - Machine Learning; I.2.6
- Abstract
Conventional nonlinear RNNs are not naturally parallelizable across the sequence length, unlike transformers and linear RNNs. Lim et al. (2024) therefore tackle parallelized evaluation of nonlinear RNNs, posing it as a fixed-point problem solved with Newton's method. By deriving and applying a parallelized form of Newton's method, they achieve large speedups over sequential evaluation. However, their approach inherits cubic computational complexity and numerical instability. We tackle these weaknesses. To reduce the computational complexity, we apply quasi-Newton approximations and show that they converge comparably to full Newton while using less memory and running faster. To stabilize Newton's method, we leverage a connection between Newton's method damped with trust regions and Kalman smoothing. This connection allows us to stabilize the iteration according to the trust region, and to use efficient parallelized Kalman algorithms to retain performance. We compare these methods empirically and highlight use cases where each algorithm excels. (A sketch of the core fixed-point idea follows this entry.)
- Comment
25 pages, 8 figures, NeurIPS 2024
- Published
2024
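
To make the core idea concrete, below is a minimal JAX sketch of the kind of parallel-in-time evaluation the abstract describes: the whole state trajectory is treated as the root of a residual, and each quasi-Newton step (with a diagonal Jacobian approximation, in the spirit of the paper's quasi-Newton variant) reduces to a linear recurrence that can be solved across all timesteps at once with an associative scan. The cell `rnn_cell`, its parameters `W` and `U`, the iteration count, and all shapes are illustrative assumptions, not the authors' code; the trust-region/Kalman-smoothing stabilization discussed in the abstract is omitted here.

```python
import jax
import jax.numpy as jnp

def rnn_cell(s_prev, x, W, U):
    # Hypothetical nonlinear cell; W, U are illustrative parameters.
    return jnp.tanh(W @ s_prev + U @ x)

def quasi_newton_parallel_eval(xs, s0, W, U, num_iters=15):
    """Treat the trajectory S = (s_1, ..., s_T) as the root of
    R_t(S) = s_t - f(s_{t-1}, x_t). Each quasi-Newton step linearizes
    f around the current guess with a *diagonal* Jacobian approximation
    and solves the resulting linear recurrence
        s'_t = A_t * s'_{t-1} + b_t
    for all t at once via an associative scan (O(log T) depth)."""
    T, d = xs.shape[0], s0.shape[0]
    S = jnp.zeros((T, d))  # initial guess for every state

    # Per-timestep diagonal of the state-to-state Jacobian of the cell.
    diag_jac = jax.vmap(
        lambda s, x: jnp.diagonal(jax.jacobian(rnn_cell)(s, x, W, U))
    )

    def newton_step(S, _):
        prev = jnp.concatenate([s0[None], S[:-1]], axis=0)  # shifted states
        f_vals = jax.vmap(rnn_cell, (0, 0, None, None))(prev, xs, W, U)
        A = diag_jac(prev, xs)        # (T, d): diagonal Jacobian blocks
        b = f_vals - A * prev         # affine part of the linearization
        b = b.at[0].add(A[0] * s0)    # fold the known initial state into t = 1

        def combine(e1, e2):
            a1, b1 = e1
            a2, b2 = e2
            return a2 * a1, a2 * b1 + b2  # compose affine maps x -> a*x + b

        _, S_new = jax.lax.associative_scan(combine, (A, b))
        return S_new, None

    S, _ = jax.lax.scan(newton_step, S, None, length=num_iters)
    return S

# Example usage with random, illustrative shapes.
key = jax.random.PRNGKey(0)
kx, kw, ku = jax.random.split(key, 3)
T, d, d_in = 128, 8, 4
xs = jax.random.normal(kx, (T, d_in))
W = 0.5 * jax.random.normal(kw, (d, d)) / jnp.sqrt(d)
U = jax.random.normal(ku, (d, d_in)) / jnp.sqrt(d_in)
S = quasi_newton_parallel_eval(xs, jnp.zeros(d), W, U)
```

The associative scan is what makes each iteration parallel across the sequence length; the true trajectory is a fixed point of the update, so iterating to convergence recovers the sequential evaluation. With dense Jacobian blocks in place of the diagonal approximation one recovers the full-Newton setting the abstract says is cubic in cost, and the paper's stabilized variant replaces this plain linear solve with a trust-region-damped step computed by a parallel Kalman smoother.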