

Spotlight in Workshop: AI4Mat-2024: NeurIPS 2024 Workshop on AI for Accelerated Materials Design

Deconstructing equivariant representations in molecular systems

Kin Long Kelvin Lee · Michael Galkin · Santiago Miret

Keywords: [ graph neural networks ] [ equivariance ] [ representation learning ] [ qm9 ] [ e3nn ]

Sat 14 Dec 9:21 a.m. PST — 9:33 a.m. PST
 
presentation: AI4Mat-2024: NeurIPS 2024 Workshop on AI for Accelerated Materials Design
Sat 14 Dec 8:15 a.m. PST — 5:20 p.m. PST

Abstract: Recent equivariant models have shown significant progress not only in chemical property prediction, but also as surrogates for dynamical simulations of molecules and materials. Many of the top-performing models in this category are built within the framework of tensor products, which preserves equivariance by restricting interactions and transformations to those allowed by symmetry selection rules. Despite being a core part of the modeling process, little attention has been paid to understanding what information persists in these equivariant representations, or to their general behavior beyond benchmark metrics. In this work, we report a set of experiments using a simple equivariant graph convolution model on the QM9 dataset, focusing on correlating quantitative performance with the resulting molecular graph embeddings. Our key finding is that, for a scalar prediction task, many of the irreducible representations are simply ignored during training, specifically those pertaining to vector ($l=1$) and tensor ($l=2$) quantities, an issue that does not necessarily make itself evident in the test metric. We show empirically that removing some unused orders of spherical harmonics improves model performance, correlating with improved latent space structure. Based on these observations, we provide a number of recommendations for future experiments to improve the efficiency and utilization of equivariant features.
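To make the pruning idea concrete, the following is a minimal, hypothetical sketch (not the authors' code, and independent of any specific library such as e3nn): an equivariant feature vector is a direct sum of irreducible representations, where each order $l$ contributes $\text{multiplicity} \times (2l+1)$ components. Dropping the unused $l=1$ and $l=2$ channels shrinks the embedding to its scalar part. The multiplicities below are illustrative, not taken from the paper.

```python
# Hypothetical illustration: dimensionality of an O(3)-equivariant feature
# vector expressed as a sum of irreducible representations (irreps).
# Each irrep of order l has dimension 2l + 1.

def irreps_dim(irreps):
    """irreps: list of (multiplicity, l) pairs, e.g. [(8, 0), (4, 1), (2, 2)]."""
    return sum(mult * (2 * l + 1) for mult, l in irreps)

# Example feature layout: 8 scalars (l=0), 4 vectors (l=1), 2 rank-2 tensors (l=2).
full = [(8, 0), (4, 1), (2, 2)]

# For a scalar prediction task, prune the higher-order channels that the
# paper finds are effectively ignored during training.
pruned = [(mult, l) for mult, l in full if l == 0]

print(irreps_dim(full))    # 8*1 + 4*3 + 2*5 = 30
print(irreps_dim(pruned))  # 8*1 = 8
```

The point of the sketch is only the bookkeeping: removing $l=1$ and $l=2$ channels reduces the representation from 30 to 8 components in this example, with all remaining capacity in the scalar ($l=0$) part.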
