

Poster in Workshop: Symmetry and Geometry in Neural Representations

Invariant Graphon Networks: Approximation and Cut Distance

Daniel Herbst · Stefanie Jegelka

Keywords: [ transferability ] [ graphons ] [ machine learning theory ] [ invariant graph networks ] [ homomorphism densities ] [ graph neural networks ] [ graph limits ] [ universal approximation ]


Abstract:

Graph limit models, like graphons for limits of dense graphs, have recently been used to study size transferability of graph neural networks (GNNs). While most existing literature focuses on message passing GNNs (MPGNNs), we instead study Invariant Graph Networks (IGNs), a powerful alternative GNN architecture (Maron et al., 2018). We generalize IGNs to graphons, introducing Invariant Graphon Networks (IWNs), and restrict our view to a subset of the IGN basis that induces bounded linear operators. This allows us to obtain universal approximation results for graphon-signals through a novel extension of homomorphism densities. In contrast to the work of Cai and Wang (2022), our results reveal stronger expressivity and better align with the geometry of graphon space. We also highlight that, unlike other universal GNN architectures such as higher-order MPGNNs, IWNs are discontinuous w.r.t. cut distance and thus generally have less favorable transferability properties.
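For context, the two notions the abstract builds on are standard in the graph limits literature; a brief sketch under the usual conventions (the paper's extension to graphon-signals is not reproduced here): a graphon is a symmetric measurable function $W \colon [0,1]^2 \to [0,1]$, and the homomorphism density of a simple graph $F$ in $W$ is

\[
  t(F, W) \;=\; \int_{[0,1]^{|V(F)|}} \prod_{\{i,j\} \in E(F)} W(x_i, x_j) \, \prod_{i \in V(F)} \mathrm{d}x_i .
\]

The cut norm and the induced cut distance are

\[
  \|W\|_{\square} \;=\; \sup_{S, T \subseteq [0,1]} \Bigl| \int_{S \times T} W(x, y) \,\mathrm{d}x\,\mathrm{d}y \Bigr|,
  \qquad
  \delta_{\square}(W, U) \;=\; \inf_{\varphi} \|W - U^{\varphi}\|_{\square},
\]

where $S, T$ range over measurable subsets of $[0,1]$, $\varphi$ ranges over measure-preserving bijections of $[0,1]$, and $U^{\varphi}(x, y) = U(\varphi(x), \varphi(y))$. Continuity (or discontinuity) of a graphon network w.r.t. $\delta_{\square}$ is what the abstract's transferability claim refers to.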
