

Poster

Universality of AdaGrad Stepsizes for Stochastic Optimization: Inexact Oracle, Acceleration and Variance Reduction

Anton Rodomanov · Xiaowen Jiang · Sebastian Stich

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: We present adaptive gradient methods (both basic and accelerated) for solving convex composite optimization problems in which the main part is approximately smooth (a.k.a. $(\delta, L)$-smooth) and can be accessed only via a (potentially biased) stochastic gradient oracle. This setting covers many interesting examples, including Hölder smooth problems and various inexact computations of the stochastic gradient. Our methods use AdaGrad stepsizes and are adaptive in the sense that they do not require knowing any problem-dependent constants except an estimate of the diameter of the feasible set, but nevertheless achieve the best possible convergence rates as if they knew the corresponding constants. We demonstrate that AdaGrad stepsizes work in a variety of situations by proving, in a unified manner, three types of new results. First, we establish efficiency guarantees for our methods in the classical setting where the oracle's variance is uniformly bounded. We then show that, under more refined assumptions on the variance, the same methods without any modifications enjoy implicit variance reduction properties, allowing us to express their complexity estimates in terms of the variance only at the minimizer. Finally, we show how to incorporate explicit SVRG-type variance reduction into our methods and obtain even faster algorithms. In all three cases, we present both basic and accelerated algorithms achieving state-of-the-art complexity bounds. As a direct corollary of our results, we obtain universal stochastic gradient methods for Hölder smooth problems which can be used in all situations.
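To illustrate the kind of stepsize rule the abstract refers to, below is a minimal sketch of a scalar AdaGrad-norm stepsize inside projected stochastic gradient descent, where the only problem-dependent constant supplied is a diameter estimate D. This is an illustrative assumption, not the paper's algorithm: it omits the composite term, acceleration, the inexact-oracle analysis, and variance reduction, and all names and the toy problem are made up for the example.

# Minimal sketch of AdaGrad-norm stepsizes in projected stochastic gradient descent.
# Illustrative only; NOT the exact method from the paper (no acceleration, no
# composite part, no variance reduction). The toy problem and names are assumptions.
import numpy as np

def adagrad_norm_sgd(grad_oracle, project, x0, D, n_steps=1000, eps=1e-12):
    """Projected SGD with a scalar AdaGrad-norm stepsize.

    grad_oracle(x) -> (possibly noisy/biased) gradient estimate at x
    project(x)     -> projection onto the feasible set
    D              -> estimate of the feasible-set diameter (the only constant needed)
    """
    x = x0.copy()
    sum_sq = 0.0                      # running sum of squared gradient norms
    x_avg = np.zeros_like(x0)
    for t in range(1, n_steps + 1):
        g = grad_oracle(x)
        sum_sq += float(np.dot(g, g))
        step = D / (np.sqrt(sum_sq) + eps)   # AdaGrad-norm stepsize
        x = project(x - step * g)
        x_avg += (x - x_avg) / t             # running average of iterates
    return x_avg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))
    b = rng.standard_normal(50)

    # Noisy gradient of 0.5*||Ax - b||^2 as a toy stochastic oracle
    grad = lambda x: A.T @ (A @ x - b) + 0.1 * rng.standard_normal(10)
    # Feasible set: Euclidean ball of radius 5 (so diameter D = 10)
    proj = lambda x: x if np.linalg.norm(x) <= 5 else 5 * x / np.linalg.norm(x)

    x_hat = adagrad_norm_sgd(grad, proj, x0=np.zeros(10), D=10.0)
    print("final objective:", 0.5 * np.linalg.norm(A @ x_hat - b) ** 2)

The point of the sketch is that the stepsize adapts automatically to the observed gradient magnitudes, so no smoothness or noise constants are ever passed in; only the diameter estimate D appears.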
