

Plenary Speaker in Workshop: Optimization for ML Workshop

Acceleration by Stepsize Hedging, Jason Altschuler

Sun 15 Dec 2 p.m. PST — 2:30 p.m. PST

Abstract:

Can we accelerate the convergence of gradient descent without changing the algorithm — just by optimizing stepsizes? Surprisingly, we show that the answer is yes. Our proposed Silver Stepsize Schedule optimizes strongly convex functions in $\kappa^{\log_\rho 2} = \kappa^{0.7864}$ iterations, where $\rho = 1+\sqrt{2}$ is the silver ratio and $\kappa$ is the condition number. This is intermediate between the textbook unaccelerated rate $\kappa$ and the accelerated rate $\sqrt{\kappa}$ due to Nesterov in 1983. The non-strongly convex setting is conceptually identical and leads to an analogously accelerated rate $\varepsilon^{-\log_\rho 2} = \varepsilon^{-0.7864}$. We conjecture, and provide partial evidence, that these rates are optimal among all possible stepsize schedules.
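
For a rough sense of the gap between these rates, here is a short illustrative calculation (the condition number is my own choice, not a figure from the talk): with $\kappa = 10^6$, the unaccelerated, silver, and Nesterov rates correspond to roughly $10^6$, $5 \times 10^4$, and $10^3$ iterations respectively.

import math

# Illustrative comparison of the three iteration counts quoted above,
# for an assumed condition number kappa = 1e6 (chosen for illustration).
kappa = 1e6
rho = 1 + math.sqrt(2)                  # the silver ratio
exponent = math.log(2, rho)             # log_rho(2) ~= 0.7864

print(f"unaccelerated   ~ {kappa:.3g} iterations")              # ~1e6
print(f"silver schedule ~ {kappa ** exponent:.3g} iterations")  # ~5.2e4
print(f"Nesterov        ~ {math.sqrt(kappa):.3g} iterations")   # ~1e3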

The Silver Stepsize Schedule is an explicit non-monotonic fractal. Why should such stepsizes help? The core intuition is “hedging” between individually suboptimal strategies — short steps and long steps — since bad cases for the former are good cases for the latter, and vice versa. Properly combining these stepsizes yields faster convergence due to the misalignment of worst-case functions. This talk is based on a line of work with Pablo Parrilo that originates from my 2018 Master’s Thesis — which established for the first time that judiciously chosen stepsizes can enable accelerated convex optimization. Prior to this thesis, the only such result was for the special case of quadratics, due to Young in 1953.
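
For readers who want to experiment, below is a minimal Python sketch of the fractal schedule described above. It assumes the construction from the accompanying papers for the smooth convex case, in which the $t$-th normalized stepsize is $1 + \rho^{v(t)-1}$ with $v(t)$ the 2-adic valuation of $t$ (equivalently, the schedule for $2^{k+1}-1$ steps is two copies of the schedule for $2^k-1$ steps bracketing one long step). The exact formula and the toy quadratic are assumptions for illustration, not quotations from the talk abstract.

import numpy as np

RHO = 1 + np.sqrt(2)  # the silver ratio

def two_adic_valuation(t):
    """Exponent of the largest power of 2 dividing t (t >= 1)."""
    v = 0
    while t % 2 == 0:
        t //= 2
        v += 1
    return v

def silver_schedule(n_steps):
    """Assumed form of the silver stepsizes: h_t = 1 + RHO**(v(t) - 1),
    normalized by the smoothness constant L (actual stepsize is h_t / L)."""
    return [1 + RHO ** (two_adic_valuation(t) - 1) for t in range(1, n_steps + 1)]

def gd_with_schedule(A, x0, stepsizes):
    """Plain gradient descent on the toy quadratic f(x) = 0.5 * x^T A x."""
    L = np.linalg.eigvalsh(A).max()   # smoothness constant of this quadratic
    x = x0.copy()
    for h in stepsizes:
        x = x - (h / L) * (A @ x)     # gradient of 0.5 * x^T A x is A x
    return x

if __name__ == "__main__":
    # Non-monotonic and fractal: doubling the horizon repeats the previous
    # pattern around a single, much longer step.
    print(np.round(silver_schedule(7), 3))   # [1.414, 2.0, 1.414, 3.414, 1.414, 2.0, 1.414]
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((20, 20))
    A = Q.T @ Q + np.eye(20)                  # positive definite -> smooth convex quadratic
    x = gd_with_schedule(A, rng.standard_normal(20), silver_schedule(31))
    print(np.linalg.norm(A @ x))              # gradient norm after 31 silver steps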
