Poster
Set-based Neural Network Encoding Without Weight Tying
Bruno Andreis · Bedionita Soro · Philip Torr · Sung Ju Hwang
We propose a neural network weight encoding method for network property prediction that uses set-to-set and set-to-vector functions to efficiently encode neural network parameters. Our approach can encode neural networks in a modelzoo of mixed architectures and different parameter sizes, in contrast to previous approaches that require custom encoding models for each architecture. Furthermore, our Set-based Neural network Encoder (SNE) takes into account the hierarchical computational structure of neural networks through a layer-wise encoding scheme that culminates in encoding all layer-wise encodings to obtain the final neural network encoding vector. To respect symmetries inherent in network weight space, we use Logit Invariance to learn the required minimal invariance properties. Additionally, we introduce a pad-chunk-encode pipeline to efficiently encode neural network layers that is adjustable to computational and memory constraints. We also introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture. In cross-dataset property prediction, we evaluate how well property predictors generalize across modelzoos trained on different datasets but sharing the same architecture. In cross-architecture property prediction, we evaluate how well property predictors transfer to modelzoos of architectures not seen during training. We show that SNE outperforms the relevant baselines on standard benchmarks.
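To make the pad-chunk step concrete, here is a minimal sketch (not the authors' implementation) of how a layer's weights might be flattened, padded, and split into fixed-size chunks before encoding; the function name, padding value, and `chunk_size` parameter are illustrative assumptions, with `chunk_size` standing in for the knob that trades off computation and memory:

```python
import numpy as np

def pad_and_chunk(layer_weights, chunk_size, pad_value=0.0):
    """Hypothetical sketch of the pad-chunk step: flatten a layer's
    weight tensor, pad it to a multiple of chunk_size, and split it
    into fixed-size chunks that a set encoder can process."""
    flat = np.asarray(layer_weights, dtype=np.float64).ravel()
    pad_len = (-flat.size) % chunk_size  # amount needed to reach a multiple
    padded = np.concatenate([flat, np.full(pad_len, pad_value)])
    return padded.reshape(-1, chunk_size)  # one row per chunk

# Example: a 3x4 weight matrix (12 values) with chunk_size=5
# pads to 15 values and yields 3 chunks of length 5.
chunks = pad_and_chunk(np.ones((3, 4)), chunk_size=5)
```

Each chunk would then be passed through the set-to-set and set-to-vector functions, so layers of any size reduce to a variable number of fixed-size pieces regardless of architecture.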