Poster
Scalable and Stable Parallelization of Nonlinear RNNs
Xavier Gonzalez · Andrew Warrington · Jimmy Smith · Scott Linderman
Conventional nonlinear RNNs are not naturally parallelizable across the sequence length, whereas transformers and linear RNNs are. Lim et al. therefore tackle parallelized evaluation of nonlinear RNNs by posing it as a fixed-point problem solved with Newton's method. By deriving and applying a parallelized form of Newton's method, they achieve large speedups over sequential evaluation. However, their approach inherits Newton's cubic computational complexity and numerical instability. We address both weaknesses. To reduce the computational complexity, we apply quasi-Newton approximations and show that they converge comparably to full Newton, use less memory, and are faster. To stabilize Newton's method, we exploit a connection between Newton's method damped with trust regions and Kalman smoothing; this connection lets us enforce the trust region while retaining performance via efficient parallelized Kalman algorithms. We compare these methods empirically and highlight the use cases where each algorithm excels.
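The core idea can be illustrated with a minimal NumPy sketch (not the authors' implementation): stack all hidden states of a toy scalar RNN into one vector, view the recurrence as a root-finding problem F(H) = 0, and apply Newton's method to all timesteps at once. The cell `f` and the dense linear solve below are illustrative assumptions; the actual work replaces the solve with parallelized (quasi-)Newton and Kalman-based algorithms.

```python
import numpy as np

A = 0.5  # recurrence weight of the toy cell (hypothetical)

def f(h_prev, x):
    # Toy nonlinear RNN cell: h_t = tanh(a * h_{t-1} + x_t)
    return np.tanh(A * h_prev + x)

def sequential(x, h0=0.0):
    # Standard sequential rollout, O(T) serial steps.
    hs, h = [], h0
    for xt in x:
        h = f(h, xt)
        hs.append(h)
    return np.array(hs)

def newton_parallel(x, h0=0.0, iters=10):
    # Treat all states H = (h_1, ..., h_T) as unknowns and solve
    # F(H)_t = h_t - f(h_{t-1}, x_t) = 0 with Newton's method.
    T = len(x)
    H = np.zeros(T)  # initial guess for every state
    for _ in range(iters):
        h_prev = np.concatenate(([h0], H[:-1]))
        pre = A * h_prev + x
        F = H - np.tanh(pre)  # residual of the recurrence
        # Jacobian is lower bidiagonal: 1 on the diagonal,
        # -a * (1 - tanh(pre_t)^2) on the subdiagonal.
        sub = -A * (1.0 - np.tanh(pre) ** 2)
        J = np.eye(T)
        J[np.arange(1, T), np.arange(T - 1)] = sub[1:]
        H = H - np.linalg.solve(J, F)  # dense solve: the cubic cost
    return H
```

Each Newton iteration evaluates `f` and its derivative at every timestep simultaneously; the dense `np.linalg.solve` stands in for the cubic-cost linear solve that the quasi-Newton and Kalman-smoothing variants are designed to cheapen and stabilize.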