Poster

On the Expressive Power of Deep Polynomial Neural Networks

Joe Kileel · Matthew Trager · Joan Bruna

East Exhibition Hall B, C #244

Keywords: [ Theory ] [ Spaces of Functions and Kernels ] [ Deep Learning -> Optimization for Deep Networks ] [ Theory -> Computational Complexity ] [ Theory -> Hardness of Learning and Approximations ]


Abstract:

We study deep neural networks with polynomial activations, particularly their expressive power. For a fixed architecture and activation degree, a polynomial neural network defines an algebraic map from weights to polynomials. The image of this map is the functional space associated with the network, and its closure is an irreducible algebraic variety. This paper proposes the dimension of this variety as a precise measure of the expressive power of polynomial neural networks. We obtain several theoretical results regarding this dimension as a function of the architecture, including an exact formula for high activation degrees, as well as upper and lower bounds on the layer widths required for deep polynomial networks to fill the ambient functional space. We also present computational evidence that it is advantageous, in terms of expressiveness, for the layer widths to increase monotonically and then decrease monotonically. Finally, we link our study to favorable optimization properties when training weights, and we draw intriguing connections with tensor and polynomial decompositions.
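Since the dimension of an irreducible variety that arises as the closure of the image of a polynomial map equals the generic rank of the map's Jacobian, this dimension can be estimated numerically at random weights. The sketch below (not code from the paper) illustrates this for a small hypothetical architecture (2, 3, 1) with squaring activation; the helpers `network` and `jacobian` and the chosen tolerances are assumptions made for the example.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

# Hypothetical small architecture (d0, d1, d2) = (2, 3, 1) with activation x -> x**r.
d0, d1, d2, r = 2, 3, 1, 2

def network(weights, X):
    """Evaluate the polynomial network on a batch of inputs X (rows are points)."""
    W1 = weights[: d1 * d0].reshape(d1, d0)
    W2 = weights[d1 * d0:].reshape(d2, d1)
    return (W2 @ (W1 @ X.T) ** r).ravel()

# The output is a homogeneous polynomial of degree r in d0 variables; its values on
# enough generic sample points determine its coefficients, so the Jacobian of the
# weight -> evaluations map has the same generic rank as the weight -> coefficients map.
n_coeffs = comb(d0 + r - 1, r)            # ambient dimension of the functional space
X = rng.standard_normal((2 * n_coeffs, d0))

def jacobian(weights, eps=1e-6):
    """Finite-difference Jacobian of the weight -> polynomial map at given weights."""
    base = network(weights, X)
    J = np.empty((base.size, weights.size))
    for j in range(weights.size):
        pert = weights.copy()
        pert[j] += eps
        J[:, j] = (network(pert, X) - base) / eps
    return J

w = rng.standard_normal(d1 * d0 + d2 * d1)            # generic (random) weights
dim = np.linalg.matrix_rank(jacobian(w), tol=1e-4)    # generic rank = variety dimension
print(f"estimated dimension: {dim} (ambient functional space has dimension {n_coeffs})")
```

In this toy example the estimated rank should equal 3, the dimension of the space of binary quadratic forms, so the architecture fills the ambient functional space in the sense of the abstract; evaluating on generic sample points rather than extracting coefficients keeps the sketch short without changing the rank.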
