

Poster

The Implicit Bias of Adam on Separable Data

Chenyang Zhang · Difan Zou · Yuan Cao

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Adam has become one of the most favored optimizers in deep learning. Despite its success in practice, many aspects of its theoretical behavior remain poorly understood. In this paper, we study the implicit bias of Adam in linear logistic regression. Specifically, we show that when the training data are linearly separable, Adam converges towards a linear classifier that achieves the maximum $\ell_\infty$-margin. Notably, for a general class of diminishing learning rates, this convergence occurs within polynomial time. Our results shed light on the difference between Adam and (stochastic) gradient descent from a theoretical perspective.
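The sketch below is a rough numerical illustration of this claim, not the paper's experimental setup: the toy Gaussian data, the 0.1/sqrt(t) step-size schedule, the default moment parameters, the omission of bias correction, and the use of scipy.optimize.linprog to compute the maximum $\ell_\infty$-margin direction are all illustrative assumptions. It simply runs Adam on a separable logistic regression problem and compares the normalized iterate with the $\ell_\infty$ max-margin classifier obtained from a linear program.

```python
# Minimal sketch (illustrative assumptions, not the paper's experiments):
# run Adam on linearly separable logistic regression and compare its
# direction with the maximum l_infinity-margin linear classifier.
import numpy as np
from scipy.optimize import linprog
from scipy.special import expit

rng = np.random.default_rng(0)
n, d = 50, 5
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))        # labels from a linear rule => separable data

# l_infinity max-margin direction via a linear program:
#   maximize gamma  subject to  y_i <x_i, w> >= gamma,  ||w||_inf <= 1
c = np.zeros(d + 1)
c[-1] = -1.0                               # linprog minimizes, so minimize -gamma
A_ub = np.hstack([-(y[:, None] * X), np.ones((n, 1))])
b_ub = np.zeros(n)
bounds = [(-1.0, 1.0)] * d + [(0.0, None)]
w_star = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[:d]

# Adam on the logistic loss with a diminishing step size
# (bias correction omitted to keep the update compact).
w = np.zeros(d)
m = np.zeros(d)
v = np.zeros(d)
beta1, beta2, eps = 0.9, 0.999, 1e-8
for t in range(1, 100_001):
    p = expit(-y * (X @ w))                # sigmoid(-y_i <x_i, w>)
    grad = -(X * (y * p)[:, None]).mean(axis=0)
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    w -= (0.1 / np.sqrt(t)) * m / (np.sqrt(v) + eps)

# After normalizing by the l_infinity norm, the Adam iterate should point
# roughly in the same direction as the l_infinity max-margin classifier.
print("Adam direction        :", np.round(w / np.linalg.norm(w, np.inf), 3))
print("l_inf max-margin dir. :", np.round(w_star / np.linalg.norm(w_star, np.inf), 3))
```

In this toy setting the two printed directions should roughly agree, in contrast to gradient descent, whose iterates are known to align with the maximum $\ell_2$-margin classifier instead.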
