Oral in Workshop: Table Representation Learning Workshop (TRL)

MotherNet: Fast Training and Inference via Hyper-Network Transformers

Andreas Mueller · Carlo Curino · Raghu Ramakrishnan

Keywords: [ meta-learning ] [ automl ] [ neural networks ] [ hyper-networks ]

Sat 14 Dec 9:20 a.m. PST — 9:30 a.m. PST

Abstract:

Foundation models are transforming machine learning across many modalities, with in-context learning replacing classical model training. Recent work on tabular data hints at a similar opportunity to build foundation models for classification on numerical data. However, existing meta-learning approaches cannot compete with tree-based methods in terms of inference time. In this paper, we propose MotherNet, a hypernetwork architecture trained on synthetic classification tasks that, once prompted with a never-before-seen training set, generates the weights of a trained child neural network via in-context learning in a single forward pass. In contrast to most existing hypernetworks, which are usually trained for relatively constrained multi-task settings, MotherNet can create models for multiclass classification on arbitrary tabular datasets without any dataset-specific gradient descent. The child network generated by MotherNet outperforms neural networks trained with gradient descent on small datasets, and is competitive with predictions by TabPFN and standard ML methods like gradient boosting. Unlike a direct application of TabPFN, MotherNet-generated networks are highly efficient at inference time.
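To make the hypernetwork idea concrete, below is a minimal conceptual sketch (not the authors' code) of what such an architecture could look like: a "mother" transformer reads an entire labelled training set in-context and emits, in one forward pass, the flattened weights of a small "child" MLP that can then classify test points without any gradient descent. All module names, layer sizes, and the encoding of (x, y) pairs here are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class MotherNetSketch(nn.Module):
    """Hypothetical hypernetwork: maps a labelled training set to child-MLP weights."""

    def __init__(self, n_features=10, n_classes=2, d_model=128, child_hidden=32):
        super().__init__()
        self.n_features, self.n_classes, self.child_hidden = n_features, n_classes, child_hidden
        # Encode each (x, y) training example as one token.
        self.embed = nn.Linear(n_features + n_classes, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Total parameter count of a one-hidden-layer child MLP.
        n_child_params = (n_features * child_hidden + child_hidden
                          + child_hidden * n_classes + n_classes)
        self.to_weights = nn.Linear(d_model, n_child_params)

    def forward(self, X_train, y_train):
        # Single forward pass over the whole training set produces the child weights.
        y_onehot = nn.functional.one_hot(y_train, self.n_classes).float()
        tokens = self.embed(torch.cat([X_train, y_onehot], dim=-1)).unsqueeze(0)
        summary = self.encoder(tokens).mean(dim=1).squeeze(0)  # pool over examples
        flat = self.to_weights(summary)
        # Slice the flat vector into the child network's weight matrices and biases.
        f, h, c = self.n_features, self.child_hidden, self.n_classes
        i = 0
        W1 = flat[i:i + f * h].view(h, f); i += f * h
        b1 = flat[i:i + h]; i += h
        W2 = flat[i:i + h * c].view(c, h); i += h * c
        b2 = flat[i:i + c]
        return W1, b1, W2, b2


def child_predict(weights, X_test):
    # Plain MLP forward pass with the generated weights: inference needs no transformer.
    W1, b1, W2, b2 = weights
    hidden = torch.relu(X_test @ W1.T + b1)
    return (hidden @ W2.T + b2).argmax(dim=-1)
```

In this sketch, the expensive transformer is run once per dataset to produce the child weights; test-time predictions then cost only a tiny MLP forward pass, which illustrates why such generated networks can be much faster at inference than rerunning an in-context model like TabPFN on every query.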