Plenary Speaker in Workshop: Optimization for ML Workshop

Online Learning Guided Quasi-Newton Methods: Improved Global Non-asymptotic Guarantees

Aryan Mokhtari

Sun 15 Dec 4 p.m. PST — 4:30 p.m. PST

Abstract:

Quasi-Newton (QN) methods are popular iterative algorithms known for their superior practical performance compared to Gradient Descent (GD)-type methods. However, the existing theoretical results for this class of algorithms do not sufficiently justify their advantage over GD-type methods. In this talk, we discuss our recent efforts to address this issue. Specifically, in the strongly convex setting, we propose the first “globally” convergent QN method that achieves an explicit “non-asymptotic superlinear” rate. We show that the rate of our method is provably faster than GD after at most O(d) iterations, where d is the problem dimension. Additionally, in the convex setting, we present an accelerated variant of our method that provably outperforms the accelerated gradient method and converges at a rate of O(\min\{1/k^2, \sqrt{d \log k}/k^{2.5}\}), where k is the number of iterations. To attain these results, we diverge from conventional approaches and construct our QN methods based on the Hybrid Proximal Extragradient (HPE) framework and its accelerated variants. Furthermore, a pivotal algorithmic concept underpinning our methods is an online learning framework for updating the Hessian approximation matrices. Specifically, we relate our method's convergence rate to the regret of a specific online convex optimization problem in matrix space and choose the sequence of Hessian approximation matrices to minimize its overall regret.
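To make the online-learning idea concrete, here is a minimal, hypothetical sketch: the Hessian approximation B is treated as the decision variable of an online learner, each iteration reveals a secant-error loss ell_k(B) = ||y_k - B s_k||^2 / ||s_k||^2, and B takes one online gradient descent step on that loss. The function name online_qn_sketch and the step sizes eta_x, eta_B are illustrative assumptions; the HPE machinery, the acceleration, and the regret analysis from the talk are omitted.

```python
import numpy as np

def online_qn_sketch(grad_f, x0, steps=500, eta_x=0.1, eta_B=0.5):
    """Hypothetical sketch: a quasi-Newton loop whose Hessian
    approximation B is updated by online gradient descent on the
    secant-error loss ell_k(B) = ||y_k - B s_k||^2 / ||s_k||^2.
    This only illustrates the online-learning idea; it is not the
    HPE-based method from the talk."""
    d = x0.size
    x = x0.astype(float).copy()
    B = np.eye(d)                       # initial Hessian approximation
    g = grad_f(x)
    for _ in range(steps):
        # regularized quasi-Newton step (stands in for the HPE update)
        x_new = x - eta_x * np.linalg.solve(B + np.eye(d), g)
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        ss = s @ s
        if ss > 1e-12:
            # online learner's move: one gradient step on ell_k at B,
            # using grad_B ell_k(B) = 2 (B s - y) s^T / ||s||^2
            B -= eta_B * np.outer(B @ s - y, s) / ss
            B = 0.5 * (B + B.T)         # project back onto symmetric matrices
        x, g = x_new, g_new
    return x

# Toy usage: minimize f(x) = 0.5 x^T A x for a random positive definite A.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)
x_star = online_qn_sketch(lambda x: A @ x, rng.standard_normal(5))
print(np.linalg.norm(A @ x_star))       # gradient norm, should be near 0
```

Minimizing the cumulative secant loss is one natural instance of the matrix-space online convex optimization problem the abstract alludes to: a no-regret learner keeps the average secant error of the B_k sequence close to that of the best fixed approximation in hindsight, which is the quantity the talk's convergence analysis is tied to.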
