Spotlight
in
Workshop: Physical Reasoning and Inductive Biases for the Real World
TorchDyn: Implicit Models and Neural Numerical Methods in PyTorch
Michael Poli · Stefano Massaroli · Atsushi Yamashita · Hajime Asama · Jinkyoo Park · Stefano Ermon
Computation in traditional deep learning models is directly determined by the explicit linking of selected primitives, e.g., layers or blocks, arranged in a computational graph. Implicit neural models instead follow a declarative approach: a desideratum is encoded into constraints, and a numerical method is applied to solve the resulting optimization problem as part of the inference pass. Existing open-source frameworks focus on explicit models and do not offer implementations of the numerical routines required to study and benchmark implicit models. We introduce TorchDyn, a PyTorch library fully tailored to implicit learning. TorchDyn primitives are categorized into numerical methods, sensitivity methods, and model classes, with pre-existing implementations that can be combined and repurposed to build complex compositional implicit architectures. TorchDyn further offers a collection of step-by-step tutorials and benchmarks designed to accelerate research and improve the robustness of experimental evaluations for implicit models.
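To make the declarative idea concrete, here is a minimal, library-free sketch (not TorchDyn's API): the "layer" output is not computed by an explicit formula but is instead declared as the solution of a constraint, which a numerical routine (simple fixed-point iteration, in this toy scalar case) solves during the inference pass. The function name, the constraint z = tanh(a*z + x), and all parameter values are illustrative assumptions.

```python
import math

def implicit_layer(x, a=0.5, tol=1e-8, max_iter=100):
    """Toy implicit 'layer': rather than an explicit computational graph,
    declare the constraint z = tanh(a*z + x) and solve it numerically
    by fixed-point iteration as part of the inference pass.
    (Illustrative sketch only; not the TorchDyn API.)"""
    z = 0.0
    for _ in range(max_iter):
        z_next = math.tanh(a * z + x)
        if abs(z_next - z) < tol:
            break
        z = z_next
    return z

z = implicit_layer(1.0)
# The output is defined implicitly by the constraint it satisfies:
residual = abs(z - math.tanh(0.5 * z + 1.0))
print(residual < 1e-6)  # True
```

Because the map is a contraction for |a| < 1, the iteration converges; in practice, implicit models pair such solvers with sensitivity methods (e.g., the adjoint method) to differentiate through the solution.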