

Poster in Workshop: Optimization for ML Workshop

Improving Deep Learning Speed and Performance through Synaptic Neural Balance

Antonios Alexos · Ian Domingo · Pierre Baldi


Abstract: We present experiments and their corresponding theory, demonstrating that synaptic neural balancing can significantly enhance deep learning speed, accuracy, and generalization. Given an additive cost function (regularizer) of the synaptic weights, a neuron is said to be in balance if the total cost of its incoming weights equals the total cost of its outgoing weights. For large classes of networks, activation functions, and regularizers, neurons can be balanced fully or partially using scaling operations that do not change their functionality. Furthermore, these balancing operations are associated with a strictly convex optimization problem with a single optimum and can be carried out in any order. In our simulations, we systematically observe that: (1) Fully balancing before training results in better performance compared to several other training approaches; (2) Interleaving partial (layer-wise) balancing and stochastic gradient descent steps during training results in faster learning convergence and better overall accuracy (with $L_1$ balancing converging faster than $L_2$ balancing); and (3) When given limited training data, neurally balanced models outperform plain or regularized models, for both feedforward and recurrent networks. In short, the evidence supports adding neural balancing operations to the arsenal of methods used to regularize and train neural networks, and shows that they can further serve as an effective optimization method.
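The sketch below illustrates the balancing idea described in the abstract for a single hidden layer with a positively homogeneous activation (ReLU): scaling a neuron's incoming weights by $\lambda$ and its outgoing weights by $1/\lambda$ leaves the network function unchanged, and $\lambda$ can be chosen so that the incoming and outgoing $L_p$ costs match. This is a minimal NumPy illustration written by the editor, not the authors' implementation; the function `balance_hidden_neurons` and its arguments are hypothetical names, and bias terms and deeper architectures are omitted.

```python
import numpy as np

def balance_hidden_neurons(W_in, W_out, p=2, eps=1e-12):
    """Balance each hidden neuron of a two-layer ReLU network (illustrative sketch).

    W_in  : (hidden, inputs)  incoming weights of the hidden layer
    W_out : (outputs, hidden) outgoing weights of the hidden layer
    p     : order of the additive cost (1 for L1 balancing, 2 for L2 balancing)

    For a positively homogeneous activation such as ReLU, multiplying a
    neuron's incoming weights by lambda > 0 and dividing its outgoing
    weights by lambda leaves the network function unchanged. Choosing
    lambda = (C_out / C_in)^(1 / (2p)), where C_in and C_out are the
    incoming and outgoing L_p costs, makes the two costs equal.
    """
    W_in, W_out = W_in.copy(), W_out.copy()
    for j in range(W_in.shape[0]):
        c_in = np.sum(np.abs(W_in[j, :]) ** p) + eps
        c_out = np.sum(np.abs(W_out[:, j]) ** p) + eps
        lam = (c_out / c_in) ** (1.0 / (2 * p))
        W_in[j, :] *= lam    # scale incoming weights
        W_out[:, j] /= lam   # compensate on outgoing weights
    return W_in, W_out

# Usage: the network x -> W2 @ relu(W1 @ x) computes the same function
# before and after balancing, but each hidden neuron's incoming and
# outgoing L_p costs are now equal.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(4, 16))
W1b, W2b = balance_hidden_neurons(W1, W2, p=2)
x = rng.normal(size=8)
assert np.allclose(W2 @ np.maximum(W1 @ x, 0), W2b @ np.maximum(W1b @ x, 0))
```

Under these assumptions, interleaved (layer-wise) balancing during training would amount to applying such a rescaling pass between stochastic gradient descent steps.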
