Poster
Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods
John Duchi · Michael Jordan · Martin J Wainwright · Andre Wibisono
Harrah’s Special Events Center 2nd Floor
Abstract:
We consider derivative-free algorithms for stochastic optimization problems that use only noisy function values rather than gradients, analyzing their finite-sample convergence rates. We show that if pairs of function values are available, algorithms that use gradient estimates based on random perturbations suffer a factor of at most $\sqrt{\dim}$ in convergence rate over traditional stochastic gradient methods, where $\dim$ is the dimension of the problem. We complement our algorithmic development with information-theoretic lower bounds on the minimax convergence rate of such problems, which show that our bounds are sharp with respect to all problem-dependent quantities: they cannot be improved by more than constant factors.
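The abstract describes stochastic gradient methods driven by two-point gradient estimates built from random perturbations. The following is a minimal Python sketch of that general idea, not the paper's exact algorithm; the perturbation distribution, smoothing radius, and step-size schedule here are illustrative assumptions.

```python
import numpy as np

def two_point_gradient_estimate(f, x, u, rng):
    """Estimate the gradient of f at x from a pair of noisy function values.

    f   : callable returning a (possibly noisy) scalar value
    x   : current iterate, shape (d,)
    u   : perturbation radius (illustrative choice, not from the paper)
    """
    z = rng.standard_normal(x.shape[0])   # random perturbation direction
    # The scaled difference of two function values approximates <grad f(x), z> z.
    return (f(x + u * z) - f(x)) / u * z

def zero_order_sgd(f, x0, steps=1000, step_size=0.1, u=1e-2, seed=0):
    """Stochastic gradient descent using only noisy function evaluations."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for t in range(1, steps + 1):
        g = two_point_gradient_estimate(f, x, u, rng)
        x -= step_size / np.sqrt(t) * g   # 1/sqrt(t) step sizes, a standard choice
    return x

# Example: minimize a noisy quadratic in d = 10 dimensions.
if __name__ == "__main__":
    d = 10
    target = np.ones(d)
    noisy_f = lambda x: np.sum((x - target) ** 2) + 0.01 * np.random.randn()
    x_hat = zero_order_sgd(noisy_f, x0=np.zeros(d))
    print("estimate:", np.round(x_hat, 2))
```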