

Poster

Matching the Statistical Query Lower Bound for $k$-Sparse Parity Problems with Stochastic Gradient Descent

Yiwen Kou · Zixiang Chen · Quanquan Gu · Sham Kakade

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: The $k$-sparse parity problem is a classical problem in computational complexity and algorithmic theory, serving as a key benchmark for understanding computational classes. In this paper, we solve the $k$-sparse parity problem with stochastic gradient descent (SGD) on two-layer fully-connected neural networks. We demonstrate that SGD can efficiently solve the $k$-sparse parity problem on a $d$-dimensional hypercube ($k\le O(\sqrt{d})$) with a sample complexity of $\tilde{O}(d^{k-1})$ using $2^{\Theta(k)}$ neurons, thus matching the established $\Omega(d^{k})$ lower bound for Statistical Query (SQ) models\footnote{The $\tilde{O}(d^{k-1})$ sample complexity implies $\tilde{O}(d^{k})$ query complexity, because each sample corresponds to $d$ scalar-valued queries.}. Our theoretical analysis begins by constructing a good neural network capable of correctly solving the $k$-sparse parity problem. We then show that a neural network trained with SGD can effectively approximate this good network, thereby solving the $k$-sparse parity problem with small statistical error. To the best of our knowledge, this is the first result on learning the $k$-sparse parity problem that matches the SQ lower bound.
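To make the setup concrete, below is a minimal numpy sketch of the learning problem the abstract describes: $k$-sparse parity labels on the $d$-dimensional hypercube, fit by online SGD on a two-layer fully-connected ReLU network with $2^{\Theta(k)}$ hidden neurons. This is an illustrative toy, not the paper's construction or proof regime; the problem sizes, squared loss, learning rate, and initialization scales are all assumptions chosen for a quick demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only -- the paper's regime allows k <= O(sqrt(d)).
d, k, n = 10, 2, 2000                      # dimension, parity size, samples
S = rng.choice(d, size=k, replace=False)   # hidden support set (unknown to the learner)

X = rng.choice([-1.0, 1.0], size=(n, d))   # uniform inputs on the hypercube
y = np.prod(X[:, S], axis=1)               # k-sparse parity labels in {-1, +1}

# Two-layer ReLU network; the paper uses 2^{Theta(k)} hidden neurons.
m = 2 ** (k + 2)
W = rng.normal(scale=1.0 / np.sqrt(d), size=(m, d))
b = rng.normal(scale=0.1, size=m)
a = rng.normal(scale=1.0 / np.sqrt(m), size=m)

# Online SGD on the squared loss, updating all parameters per sample.
lr = 0.01
for epoch in range(50):
    for i in rng.permutation(n):
        x, yi = X[i], y[i]
        h = np.maximum(W @ x + b, 0.0)     # hidden ReLU activations
        err = a @ h - yi                   # residual of the network output
        grad_pre = err * a * (h > 0)       # gradient w.r.t. pre-activations
        a -= lr * err * h
        W -= lr * np.outer(grad_pre, x)
        b -= lr * grad_pre

acc = np.mean(np.sign(np.maximum(X @ W.T + b, 0.0) @ a) == y)
print(f"training accuracy: {acc:.3f}")
```

With these toy sizes the network typically fits the parity on the training set; the paper's contribution is the quantitative statement that the required sample complexity, $\tilde{O}(d^{k-1})$, matches the SQ lower bound up to the sample-to-query conversion.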
