Invited Talk in Workshop: Federated Learning: Recent Advances and New Challenges
Asynchronous Optimization: Delays, Stability, and the Impact of Data Heterogeneity
Konstantin Mishchenko
In this talk, I will cover recent advances in the study of asynchronous stochastic gradient descent (SGD). Previously, it was repeatedly stated in theoretical papers that the performance of asynchronous SGD degrades dramatically when any delay is large, giving the impression that performance depends primarily on the delays. On the contrary, we prove much better guarantees for the same asynchronous SGD algorithm regardless of the delays in the gradients, depending instead only on the number of parallel devices used to implement the algorithm. Our guarantees are strictly better than the existing analyses, and we also argue that asynchronous SGD outperforms synchronous minibatch SGD in the settings we consider. For our analysis, we introduce a novel recursion based on "virtual iterates" and delay-adaptive stepsizes, which allow us to derive state-of-the-art guarantees for both convex and non-convex objectives.
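To make the setting concrete, below is a minimal single-process simulation sketch of asynchronous SGD with a delay-adaptive stepsize on a toy least-squares problem. The delay model (a random worker finishes at each step), the stepsize rule eta = base_lr / (1 + delay), the problem, and all constants are illustrative assumptions for exposition, not the exact scheme analyzed in the talk.

```python
# Illustrative simulation: asynchronous SGD where gradients arrive with delays
# and stale gradients are applied with a smaller stepsize.
# All modeling choices below (delay model, stepsize rule, constants) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(x) = 0.5 * ||A x - b||^2 with row-sampled stochastic gradients.
n, d = 200, 10
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.01 * rng.normal(size=n)

def stochastic_grad(x):
    i = rng.integers(n)
    a_i = A[i]
    return (a_i @ x - b[i]) * a_i

num_workers = 8
base_lr = 0.1
num_steps = 5000

x = np.zeros(d)
# Each simulated worker remembers the iterate it last read and when it read it.
worker_snapshots = [(x.copy(), 0) for _ in range(num_workers)]  # (iterate copy, read step)

for t in range(num_steps):
    # A random worker finishes its gradient computation (models unpredictable speeds).
    w = rng.integers(num_workers)
    x_read, t_read = worker_snapshots[w]
    g = stochastic_grad(x_read)          # gradient computed at a possibly stale iterate
    delay = t - t_read                   # staleness of that gradient

    # Delay-adaptive stepsize: downweight very stale gradients (illustrative rule).
    lr = base_lr / (1 + delay)
    x = x - lr * g

    # The worker immediately reads the current iterate and starts its next gradient.
    worker_snapshots[w] = (x.copy(), t + 1)

print("final error:", np.linalg.norm(x - x_star))
```

The point of the sketch is that the server never waits for all workers (unlike synchronous minibatch SGD), and the only quantity the update rule reacts to is the staleness of each incoming gradient; the typical staleness is governed by the number of parallel workers rather than by any worst-case delay.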