Poster in Workshop: Symmetry and Geometry in Neural Representations
Topological Blindspots: Understanding and Extending Topological Deep Learning Through the Lens of Expressivity
Yam Eitan · Yoav Gelberg · Guy Bar-Shalom · Fabrizio Frasca · Michael Bronstein · Haggai Maron
Keywords: [ Topology ] [ Higher-Order Message-Passing ] [ Expressivity ] [ Topological Deep Learning ] [ GNNs ]
Topological deep learning (TDL) is a rapidly growing field that seeks to leverage topological structure in data and to facilitate learning from data supported on topological objects. Most TDL architectures can be unified under the framework of higher-order message-passing (HOMP), which generalizes graph message-passing to higher-order domains. In the first part of the paper, we explore HOMP's expressive power from a topological perspective, demonstrating the framework's inability to express fundamental topological and metric invariants such as diameter, orientability, planarity, and homology. In the second part of the paper, we develop two new classes of TDL architectures -- multi-cellular networks (MCN) and scalable MCN (SMCN) -- which draw inspiration from expressive graph architectures. MCN can reach full expressivity, but scaling it to large data objects can be computationally expensive. Therefore, SMCN is designed as a more scalable alternative that still mitigates many of HOMP's expressivity limitations. In the third part of the paper, we design benchmarks for evaluating TDL models on their ability to learn topological properties of complexes. We then evaluate SMCN on these benchmarks as well as on real-world graph datasets, demonstrating improvements over both HOMP baselines and expressive graph methods, and highlighting the value of expressively leveraging topological information.
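To make the HOMP idea concrete, the following is a minimal, illustrative sketch of one higher-order message-passing layer on a toy 2-dimensional cell complex (a filled triangle). The data layout, the sum/average update rule, and all names here are assumptions for exposition, not the paper's exact formulation: each cell exchanges messages with its boundary (lower-rank) and coboundary (higher-rank) neighbors.

```python
# Illustrative HOMP sketch on a toy cell complex: a filled triangle.
# Rank 0: vertices {0, 1, 2}; rank 1: edges {01, 12, 02}; rank 2: one face "f".
# The boundary map and update rule are simplified assumptions for exposition.

boundary = {
    1: {"01": [0, 1], "12": [1, 2], "02": [0, 2]},  # edge -> incident vertices
    2: {"f": ["01", "12", "02"]},                   # face -> incident edges
}

def homp_layer(feats):
    """One synchronous HOMP layer on scalar cell features.

    Each cell of rank r > 0 averages its own state with the mean of its
    boundary cells' features, and broadcasts half of its own (pre-update)
    feature to each boundary cell (a coboundary message).
    """
    new = {rank: dict(cells) for rank, cells in feats.items()}
    for rank in (1, 2):
        for cell, faces in boundary[rank].items():
            # Boundary messages: lower-rank neighbors -> cell.
            msg = sum(feats[rank - 1][f] for f in faces)
            new[rank][cell] = 0.5 * (feats[rank][cell] + msg / len(faces))
            # Coboundary messages: cell -> its lower-rank faces.
            for f in faces:
                new[rank - 1][f] = new[rank - 1][f] + 0.5 * feats[rank][cell]
    return new

# One scalar feature per cell, updated by two HOMP layers.
feats = {
    0: {0: 1.0, 1: 2.0, 2: 3.0},
    1: {"01": 0.0, "12": 0.0, "02": 0.0},
    2: {"f": 1.0},
}
for _ in range(2):
    feats = homp_layer(feats)
```

Real TDL architectures replace the scalar sum/average with learnable message and update functions per neighborhood type; the point of the sketch is only the neighborhood structure (boundary/coboundary) that HOMP operates over, which the paper shows is insufficient to capture invariants such as homology or orientability.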