Poster in Workshop: Optimization for ML Workshop

Connections between Schedule-Free SGD, Accelerated SGD Variants, and Weight Averaging

Depen Morwani · Nikhil Vyas · Hanlin Zhang · Sham Kakade


Abstract:

In this work, we uncover precise connections between recently proposed optimizers, such as Schedule-Free SGD and Lion, and the literature on accelerated SGD variants. We show that Schedule-Free SGD can be understood precisely as accelerated SGD combined with weight averaging. The primary idea behind all of these optimizers is decoupling the momentum coefficient from the weight placed on the gradient at the current step. We support our claims with experiments on a 150M-parameter decoder-only language model, demonstrating that ScheduleFreeAdamW performs comparably to Adam combined with accelerated SGD and weight averaging.
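To make the decoupling concrete, below is a minimal NumPy sketch of the Schedule-Free SGD update in the form popularized by Defazio et al. (2024), which the abstract connects to accelerated SGD plus weight averaging. The function name, `grad_fn`, and all hyperparameter values here are illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np

def schedule_free_sgd(grad_fn, x0, lr=0.1, beta=0.9, steps=100):
    """Minimal sketch of the Schedule-Free SGD update
    (form from Defazio et al., 2024). grad_fn, lr, beta, and
    steps are illustrative placeholders, not the paper's setup."""
    z = x0.copy()  # base SGD iterate
    x = x0.copy()  # running weight-averaged iterate
    for t in range(1, steps + 1):
        # The gradient is evaluated at an interpolation of z and x;
        # beta decouples the momentum-like mixing from the weight
        # the learning rate places on the current gradient.
        y = (1 - beta) * z + beta * x
        z = z - lr * grad_fn(y)
        # Equal-weight (Polyak-style) average of the z iterates.
        c = 1.0 / t
        x = (1 - c) * x + c * z
    return x

# Usage: minimize f(w) = ||w||^2 / 2, whose gradient is w itself.
w = schedule_free_sgd(lambda w: w, np.ones(3), lr=0.1, beta=0.9, steps=200)
```

Reading the update this way, the x sequence is the weight average and the beta-interpolated evaluation point plays the role of the accelerated iterate, which is the correspondence the abstract describes.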
