Poster
Efficient Convex Relaxations for Streaming PCA
Raman Arora · Teodor Vanislavov Marinov
East Exhibition Hall B, C #57
Keywords: [ Algorithms ] [ Large Scale Learning ] [ Stochastic Optimization ] [ Optimization ]
Abstract:
We revisit two algorithms, matrix stochastic gradient (MSG) and $\ell_2$-regularized MSG (RMSG), that are instances of stochastic gradient descent (SGD) on a convex relaxation of principal component analysis (PCA). These algorithms have been shown empirically to outperform Oja's algorithm in terms of iteration complexity, while having runtime comparable to that of Oja's algorithm. However, these findings are not supported by existing theoretical results. While the iteration complexity bound for $\ell_2$-RMSG was recently shown to match that of Oja's algorithm, its theoretical efficiency was left as an open problem. In this work, we give improved bounds on the per-iteration cost of mini-batched variants of both MSG and $\ell_2$-RMSG and arrive at an algorithm whose total computational complexity matches that of Oja's algorithm.
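To illustrate the kind of update the abstract refers to, here is a minimal sketch of one mini-batched MSG-style step on the convex relaxation of PCA (maximize $\langle C, X\rangle$ over $\{X : 0 \preceq X \preceq I,\ \mathrm{tr}(X) = k\}$): a stochastic gradient step followed by a projection of the eigenvalues onto the capped simplex. The function names (`msg_step`, `msg_projection`), the bisection-based projection, and the use of a full $d \times d$ eigendecomposition per step are illustrative assumptions for clarity only; the paper's contribution concerns precisely how to reduce this per-iteration cost, which this sketch does not attempt.

```python
import numpy as np

def msg_projection(eigvals, k):
    """Project eigenvalues onto {0 <= l_i <= 1, sum_i l_i = k} by finding a
    shift s with clip(eigvals + s, 0, 1) summing to k (bisection sketch)."""
    lo, hi = -1.0 - eigvals.max(), 1.0 - eigvals.min()
    for _ in range(100):
        s = 0.5 * (lo + hi)
        if np.clip(eigvals + s, 0.0, 1.0).sum() > k:
            hi = s
        else:
            lo = s
    return np.clip(eigvals + 0.5 * (lo + hi), 0.0, 1.0)

def msg_step(X, batch, eta, k):
    """One mini-batched gradient step on the convex relaxation of PCA,
    followed by projection back onto the capped-trace spectrahedron."""
    C = batch.T @ batch / batch.shape[0]   # empirical covariance of the mini-batch
    Y = X + eta * C                        # stochastic gradient (ascent) step
    eigvals, eigvecs = np.linalg.eigh(Y)   # O(d^3) here; the point of the paper is to avoid this cost
    eigvals = msg_projection(eigvals, k)
    return (eigvecs * eigvals) @ eigvecs.T

# Example usage (hypothetical data): recover the top-k subspace of a random stream.
d, k, eta = 20, 3, 0.1
rng = np.random.default_rng(0)
X = np.zeros((d, d))
for _ in range(500):
    batch = rng.standard_normal((16, d)) * np.concatenate([np.full(k, 3.0), np.ones(d - k)])
    X = msg_step(X, batch, eta, k)
```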