Poster

Graph neural networks and non-commuting operators

Mauricio Velasco · Kaiying O'Hare · Bernardo Rychtenberg · Soledad Villar

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Graph neural networks (GNNs) provide state-of-the-art results in a wide variety of tasks that typically involve predicting features at the nodes of a graph. They are built from layers of graph convolutions, which serve as a powerful inductive bias for describing the flow of information among the vertices. Often, more than one data modality is available. This work considers a setting in which several graphs share the same vertex set and a common node-level learning task. This generalizes standard GNN models to GNNs with several graph operators that do not commute; we call the resulting model a graph-tuple neural network (GtNN). In this work, we develop the mathematical theory needed to address the stability and transferability of GtNNs using properties of non-commuting non-expansive operators. We develop a limit theory of graphon-tuple neural networks and use it to prove a universal transferability theorem guaranteeing that all graph-tuple neural networks are transferable on convergent graph-tuple sequences. In particular, there is no non-transferable energy under the convergence considered here. Our theoretical results extend well-known transferability theorems for GNNs to the case of several simultaneous graphs (GtNNs) and provide a strict improvement on what is currently known even in the GNN case. We illustrate our theoretical results with simple experiments on synthetic data. To this end, we derive a training procedure that provably enforces the stability of the resulting model.
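To make the model concrete, below is a minimal NumPy sketch of one first-order GtNN-style layer of the form X' = sigma(X W_0 + sum_k S_k X W_k), where S_1, ..., S_m are the (possibly non-commuting) operators of the graph tuple on a shared vertex set. The layer form, the spectral normalization, and all names here are illustrative assumptions for exposition, not the paper's exact parameterization.

import numpy as np

def gtnn_layer(X, shift_ops, W0, Ws, sigma=np.tanh):
    """One graph-tuple layer (illustrative sketch, not the paper's exact model).

    X         : (n, d_in) node features shared by all graphs in the tuple.
    shift_ops : list of m (n, n) graph operators S_1, ..., S_m on the same
                vertex set; they need not commute with one another.
    W0        : (d_in, d_out) weight for the identity term.
    Ws        : list of m (d_in, d_out) weights, one per operator.
    """
    Z = X @ W0
    for S, W in zip(shift_ops, Ws):
        Z += S @ X @ W  # diffuse features along graph k, then mix channels
    return sigma(Z)

# Toy usage: two random graphs on the same 5 vertices.
rng = np.random.default_rng(0)
n, d_in, d_out = 5, 3, 4
A1 = np.triu(rng.integers(0, 2, (n, n)), 1); A1 = A1 + A1.T
A2 = np.triu(rng.integers(0, 2, (n, n)), 1); A2 = A2 + A2.T
# Normalize each operator to spectral norm <= 1 so the layer is built from
# non-expansive operators, mirroring the stability conditions above.
S1 = A1 / max(np.linalg.norm(A1, 2), 1.0)
S2 = A2 / max(np.linalg.norm(A2, 2), 1.0)
X = rng.standard_normal((n, d_in))
W0 = 0.1 * rng.standard_normal((d_in, d_out))
Ws = [0.1 * rng.standard_normal((d_in, d_out)) for _ in range(2)]
print(gtnn_layer(X, [S1, S2], W0, Ws).shape)  # (5, 4)

Normalizing each operator to spectral norm at most one is one simple way to keep every layer non-expansive; the abstract's training procedure that provably enforces stability plays an analogous role in the actual experiments.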
