

Poster

A Simple and Optimal Approach for Universal Online Learning with Gradient Variations

Yu-Hu Yan · Peng Zhao · Zhi-Hua Zhou

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

We investigate the problem of universal online convex optimization with problem-dependent regret. Universal methods aim to achieve provable regret guarantees without requiring the curvature information of the online functions in advance. We focus on the problem-dependent gradient-variation regret because it implies small-loss bounds (another well-known problem-dependent quantity) and is essential in bridging stochastic and adversarial optimization. In this work, we design a universal approach with optimal gradient-variation regret guarantees simultaneously for strongly convex, exp-concave, and convex functions, thereby resolving an open problem highlighted by Yan et al. [2023]. Our approach is simple: it is efficient to implement, using a two-layer online ensemble structure with only one gradient query per round, and easy to analyze. Previous works on gradient variations require controlling algorithmic stability, which is challenging and leads to sub-optimal regret and complicated algorithm design; our analysis overcomes this issue via a novel and alternative solution to gradient-variation bounds.
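For context, the abstract does not state the gradient-variation quantity itself. The following definition is the one standard in this line of work (e.g., the literature the abstract cites); the regret rates in the comments are likewise the ones typically called optimal in that literature and are included here as an assumption, not a claim from the abstract.

```latex
% Gradient variation (standard definition in this literature; assumed, not quoted):
% V_T measures how much the gradients of the online functions f_t vary over time.
\[
  V_T \;=\; \sum_{t=2}^{T} \sup_{\mathbf{x} \in \mathcal{X}}
  \big\| \nabla f_t(\mathbf{x}) - \nabla f_{t-1}(\mathbf{x}) \big\|_2^2 .
\]
% The rates commonly regarded as optimal in this setting are, up to constants,
% O(log V_T) for strongly convex, O(d log V_T) for exp-concave,
% and O(sqrt(V_T)) for general convex functions.
```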

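The abstract describes a two-layer online ensemble with a single gradient query per round but gives no pseudocode. Below is a minimal, hypothetical sketch of that general structure, not the authors' algorithm: a Hedge-style meta-learner aggregates base online-gradient-descent learners, and the one gradient queried at the combined prediction is reused by every layer through a linearized surrogate loss. All names, parameters, and update rules here are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of a two-layer online ensemble (NOT the paper's algorithm):
# a Hedge meta-learner aggregates base OGD learners with different step sizes.
# Only ONE gradient is queried per round, at the combined prediction x_t; each
# layer then updates on the linearized surrogate loss <g_t, x>.

class TwoLayerEnsemble:
    def __init__(self, dim, step_sizes, meta_lr=0.1, radius=1.0):
        self.bases = [np.zeros(dim) for _ in step_sizes]     # base predictions
        self.etas = step_sizes                               # per-base OGD step sizes
        self.w = np.ones(len(step_sizes)) / len(step_sizes)  # meta weights
        self.meta_lr = meta_lr
        self.radius = radius                                 # Euclidean-ball domain

    def predict(self):
        # Combined prediction: meta-weighted average of the base learners.
        return sum(w * x for w, x in zip(self.w, self.bases))

    def update(self, grad):
        # grad = the single gradient query of the round, taken at predict().
        # Meta update (Hedge) on the linearized surrogate losses <grad, x_i>.
        surrogate = np.array([grad @ x for x in self.bases])
        self.w *= np.exp(-self.meta_lr * (surrogate - surrogate.min()))
        self.w /= self.w.sum()
        # Base updates: OGD on the same surrogate, projected back onto the ball.
        for i, (eta, x) in enumerate(zip(self.etas, self.bases)):
            y = x - eta * grad
            norm = np.linalg.norm(y)
            if norm > self.radius:
                y *= self.radius / norm
            self.bases[i] = y

# Illustrative usage on a toy quadratic loss stream:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    learner = TwoLayerEnsemble(dim=5, step_sizes=[0.5 / 2**k for k in range(5)])
    for t in range(100):
        x_t = learner.predict()
        target = rng.normal(size=5) * 0.1   # drifting optimum
        grad = x_t - target                 # gradient of 0.5 * ||x - target||^2
        learner.update(grad)
```

Linearizing through the single queried gradient is what keeps the ensemble at one gradient query per round; the paper's actual surrogates and meta-algorithm are presumably more refined (this is how it attains the optimal gradient-variation rates), so the sketch above only conveys the two-layer structure.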