Poster in Workshop: The Symbiosis of Deep Learning and Differential Equations -- III
Towards Optimal Network Depths: Control-Inspired Acceleration of Training and Inference in Neural ODEs
Keyan Miao · Konstantinos Gatsis
Keywords: [ optimal control ] [ convergence speed ] [ temporal optimization ] [ minimum-time control ] [ Lyapunov ] [ network depth ] [ Neural ODEs ]
Neural Ordinary Differential Equations (ODEs) offer a powerful framework for learning continuous dynamics, but their slow training and inference limit broader use. This paper proposes joint spatial and temporal optimization inspired by control theory: it seeks an optimal network depth, expressed through the ODE's terminal time, to accelerate both training and inference while maintaining performance. Two approaches are presented: the first treats training as a single-stage minimum-time optimal control problem that adjusts the terminal time directly, while the second combines pre-training with a Lyapunov method and then safely updates the terminal time in a second stage. Experiments confirm the effectiveness of both approaches in addressing the speed limitations of Neural ODEs.
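To make the minimum-time idea concrete, below is a minimal sketch in PyTorch, not the authors' code: a Neural ODE whose terminal time T (a proxy for network depth) is a trainable parameter optimized jointly with the dynamics. The fixed-step Euler integrator, the penalty weight on T, and names such as TimeOptimalNeuralODE are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch (assumed, not from the paper): train a Neural ODE while also
# learning the terminal time T, so shorter integration horizons are favored.
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Learned vector field f_theta(t, h)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, h):
        return self.net(h)


class TimeOptimalNeuralODE(nn.Module):
    """Integrates h' = f_theta(t, h) on [0, T] with fixed-step Euler,
    where log T is a learnable parameter (so T stays positive)."""
    def __init__(self, dim, num_steps=20, T_init=1.0):
        super().__init__()
        self.func = ODEFunc(dim)
        self.log_T = nn.Parameter(torch.log(torch.tensor(T_init)))
        self.num_steps = num_steps

    def forward(self, h0):
        T = self.log_T.exp()
        dt = T / self.num_steps
        h, t = h0, torch.zeros((), device=h0.device)
        for _ in range(self.num_steps):
            h = h + dt * self.func(t, h)  # explicit Euler step
            t = t + dt
        return h, T


# Toy usage: minimize task loss plus a penalty on T (minimum-time flavor).
model = TimeOptimalNeuralODE(dim=2)
head = nn.Linear(2, 1)
opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-3)
x = torch.randn(128, 2)
y = (x[:, :1] * x[:, 1:]).sign()  # synthetic +/-1 labels
for _ in range(100):
    h_T, T = model(x)
    loss = nn.functional.soft_margin_loss(head(h_T), y) + 0.1 * T  # 0.1 is an assumed weight
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The penalty on T plays the role of the time cost in a minimum-time objective; the paper's second approach instead fixes the dynamics via pre-training with a Lyapunov method and then shrinks the terminal time in a separate, safe update stage.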