Poster · NeurIPS 2024 Workshop: Machine Learning and the Physical Sciences
Training Hamiltonian neural networks without backpropagation
Atamert Rahma · Chinmay Datar · Felix Dietrich
Neural networks that synergistically integrate data and physical laws offer great promise for modeling dynamical systems. However, iterative gradient-based optimization of network parameters is often computationally expensive and suffers from slow convergence. In this work, we present a backpropagation-free algorithm that accelerates the training of neural networks for approximating Hamiltonian systems, using both data-agnostic and data-driven parameter-sampling schemes. We demonstrate that data-driven sampling of the network parameters substantially outperforms both data-agnostic sampling and traditional gradient-based iterative optimization when approximating functions with steep gradients or wide input domains. Our approach runs more than 20 times faster on CPUs than traditionally trained Hamiltonian neural networks on a GPU, and it is more than four orders of magnitude more accurate in most examples, including the chaotic Hénon-Heiles system.
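To make the backpropagation-free idea concrete, below is a minimal sketch in a plain regression setting: hidden-layer parameters are sampled once from pairs of data points (in the spirit of data-driven sampled networks), and only the linear output layer is fit by least squares, so no gradient iterations are needed. This is an illustrative assumption-laden simplification, not the authors' exact scheme; in particular, it fits Hamiltonian values directly rather than the symplectic vector field, and the pair-based sampling rule and the toy harmonic-oscillator target are placeholders.

```python
import numpy as np

def sample_hidden_layer(X, width, rng):
    """Data-driven sampling (illustrative, not the paper's exact rule):
    draw weight directions from pairs of training points, so regions
    where the target varies quickly receive more features."""
    n = X.shape[0]
    i, j = rng.integers(0, n, size=(2, width))
    diff = X[j] - X[i]
    diff[np.all(diff == 0, axis=1)] += 1e-8      # avoid zero directions
    norms = np.linalg.norm(diff, axis=1, keepdims=True)
    W = diff / norms**2                          # weight scale ~ 1/distance
    b = -np.sum(W * X[i], axis=1)                # bias centers the feature on the data
    return W, b

def fit_sampled_network(X, y, width=512, seed=0):
    """Backpropagation-free training: sample hidden parameters once,
    then solve a linear least-squares problem for the output layer."""
    rng = np.random.default_rng(seed)
    W, b = sample_hidden_layer(X, width, rng)
    H = np.tanh(X @ W.T + b)                     # hidden-layer features
    c, *_ = np.linalg.lstsq(H, y, rcond=None)    # closed-form output weights
    return W, b, c

# Toy usage: approximate H(q, p) = (q^2 + p^2) / 2 for a harmonic oscillator.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 2))
y = 0.5 * (X**2).sum(axis=1)
W, b, c = fit_sampled_network(X, y)
pred = np.tanh(X @ W.T + b) @ c
print("max abs error:", np.abs(pred - y).max())
```

Because the only trainable parameters enter linearly, training reduces to a single least-squares solve, which is what makes the CPU-only timings competitive with GPU-based gradient training.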