

Poster

A Convergence Analysis of Gradient Descent on Graph Neural Networks

Pranjal Awasthi · Abhimanyu Das · Sreenivas Gollapudi

Virtual

Keywords: [ Optimization ] [ Graph Learning ] [ Deep Learning ] [ Theory ]


Abstract: Graph Neural Networks (GNNs) are a powerful class of architectures for solving learning problems on graphs. While many variants of GNNs have been proposed in the literature and have achieved strong empirical performance, their theoretical properties are less well understood. In this work we study the convergence properties of the gradient descent algorithm when used to train GNNs. In particular, we consider the realizable setting, where the data is generated from a network with unknown weights, and our goal is to identify conditions under which gradient descent on a GNN architecture recovers near-optimal solutions. While such analyses have been carried out in recent years for other architectures such as fully connected feed-forward networks, the message-passing nature of the updates in a GNN poses a new challenge in understanding the gradient descent dynamics. We take a step towards overcoming this by proving that, for deep linear GNNs, gradient descent recovers solutions up to error $\epsilon$ in $O(\log(1/\epsilon))$ iterations under natural assumptions on the data distribution. Furthermore, for one-round GNNs with ReLU activations, we show that gradient descent recovers solutions up to error $\epsilon$ in $O(\frac{1}{\epsilon^2} \log(\frac{1}{\epsilon}))$ iterations.
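To make the realizable setting concrete, the sketch below is a minimal illustration (not the paper's construction): a "teacher" one-round linear GNN with unknown weights generates the targets, and a "student" of the same architecture is trained by plain gradient descent on the squared loss. The graph model, step size, and names such as `w_star` are assumptions made purely for illustration; in this toy linear case the error decays geometrically, mirroring the $O(\log(1/\epsilon))$ rate stated for linear GNNs.

```python
import numpy as np

# Illustrative sketch only: gradient descent on a one-round *linear* GNN
# in the realizable setting. Targets come from a teacher GNN with unknown
# weights w_star; the student shares the architecture and is trained with
# gradient descent on the squared loss.

rng = np.random.default_rng(0)

n, d = 200, 16                       # number of nodes, feature dimension
X = rng.normal(size=(n, d))          # node features

# Random symmetric graph with self-loops, row-normalized (A_hat).
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.maximum(A, A.T) + np.eye(n)
A_hat = A / A.sum(axis=1, keepdims=True)

# Realizable targets: one round of message passing + linear readout.
w_star = rng.normal(size=d)          # unknown to the learner
y = A_hat @ X @ w_star

# Student: minimize 1/(2n) * ||A_hat X w - y||^2 by gradient descent.
M = A_hat @ X                        # effective design matrix
w = np.zeros(d)
lr = n / np.linalg.norm(M, 2) ** 2   # 1/L for this quadratic objective

for t in range(2000):
    grad = M.T @ (M @ w - y) / n
    w -= lr * grad

print("parameter error :", np.linalg.norm(w - w_star))
print("prediction error:", np.linalg.norm(M @ w - y) / np.sqrt(n))
```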
