Poster in Workshop: D3S3: Data-driven and Differentiable Simulations, Surrogates, and Solvers
Gradient of Clifford Neural Networks
Takashi Maruyama · Francesco Alesiani
Keywords: [ Geometric deep learning ] [ Physics simulation ] [ Geometric algebra ] [ Gradient ] [ Clifford algebra ]
Recent advances in the study of physical systems are supported by geometric deep learning, which incorporates the geometric priors present in those systems. One class of such architectures, tailored to capture the geometric interactions between physical features, is the Clifford neural network, built on the concept of Clifford algebra. While Clifford neural networks have made promising progress on a variety of tasks, they remain underexplored in tasks that rely on the "inverse mode" of the networks, which at inference time requires the gradient of the network with respect to its input. Here we show, using the notion of the Euclidean scalar product, that the gradient of functions between Clifford algebras on Euclidean spaces induces the canonical gradient of the functions restricted to the base vector spaces. This ensures that the gradient of Clifford neural networks coincides with the one obtained through widely adopted automatic differentiation modules such as Autograd. We empirically evaluate the benefit of the gradient of Clifford neural networks on important classes of problems, such as inverse design and sampling from distributions in scientific discovery.
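The coincidence claimed above can be illustrated with a minimal sketch (not the authors' implementation): represent a multivector in the Clifford algebra Cl(2,0) by its four coefficients, implement the geometric product by hand, and check that the gradient of a scalar-valued Clifford expression computed by PyTorch's autograd matches the analytic gradient. The basis ordering and the function f below are illustrative choices, not taken from the paper.

```python
import torch

def geometric_product(a, b):
    """Geometric product in Cl(2,0).

    Coefficients are ordered (1, e1, e2, e12), with
    e1^2 = e2^2 = 1 and e12^2 = -1.
    """
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return torch.stack([
        a0 * b0 + a1 * b1 + a2 * b2 - a3 * b3,  # scalar part
        a0 * b1 + a1 * b0 - a2 * b3 + a3 * b2,  # e1 part
        a0 * b2 + a1 * b3 + a2 * b0 - a3 * b1,  # e2 part
        a0 * b3 + a1 * b2 - a2 * b1 + a3 * b0,  # e12 part
    ])

# f(x) = scalar part of x * x = x0^2 + x1^2 + x2^2 - x3^2
x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
f = geometric_product(x, x)[0]
f.backward()

# Autograd agrees with the analytic gradient (2x0, 2x1, 2x2, -2x3).
print(x.grad)  # tensor([ 2.,  4.,  6., -8.])
```

In an inverse-design or sampling setting, this gradient with respect to the input (rather than the weights) is what drives the update of the design variables or the sampler state.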