

Poster in Workshop: Table Representation Learning Workshop (TRL)

PORTAL: Scalable Tabular Foundation Models via Content-Specific Tokenization

Marco Spinaci · Marek Polewczyk · Johannes Hoffart · Markus Kohler · Sam Thelin · Tassilo Klein

Keywords: [ Self-supervised Learning ] [ Tables ] [ Transformer ] [ Tabular data ] [ Foundation model ] [ Pretraining ]


Abstract:

Self-supervised learning on tabular data seeks to apply advances from the natural language and image domains to the heterogeneous domain of tables. However, current techniques often struggle to integrate multi-domain data and require data cleaning or impose specific structural constraints, limiting the scale of pre-training datasets. We introduce PORTAL (Pretraining One-Row-at-a-Time for All tabLes), a framework that handles diverse data modalities without the need for cleaning or preprocessing. This simple yet powerful approach can be effectively pre-trained on online-collected datasets and fine-tuned to match state-of-the-art methods on complex classification and regression tasks. This work offers a practical advancement in self-supervised learning for large-scale tabular data.
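
The abstract does not detail PORTAL's tokenizer, but the sketch below illustrates what content-specific, one-row-at-a-time tokenization of an uncleaned table row could look like. All function names, token fields, and per-type handling here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the abstract does not specify PORTAL's tokenizer,
# so every name and type-handling rule below is an assumption.
from datetime import date
from typing import Any


def tokenize_cell(column: str, value: Any) -> dict:
    """Map one cell to a type-tagged token, keyed by its column name."""
    if value is None or value == "":
        return {"column": column, "kind": "missing"}
    if isinstance(value, bool):
        return {"column": column, "kind": "categorical", "text": str(value)}
    if isinstance(value, (int, float)):
        # Numeric cells keep their magnitude instead of being split into digits.
        return {"column": column, "kind": "numeric", "value": float(value)}
    if isinstance(value, date):
        return {"column": column, "kind": "date",
                "value": (value.year, value.month, value.day)}
    # Fallback: treat anything else as free text / categorical content.
    return {"column": column, "kind": "text", "text": str(value)}


def tokenize_row(row: dict) -> list[dict]:
    """One row at a time: each row becomes a set of content-specific cell tokens."""
    return [tokenize_cell(col, val) for col, val in row.items()]


# Example: a raw row with mixed modalities and a missing value, no cleaning applied.
row = {"age": 42, "city": "Berlin", "signup": date(2021, 5, 3), "income": None}
print(tokenize_row(row))
```

Keying each token by column name and type, rather than flattening the row into plain text, is one way such a scheme could feed mixed numeric, categorical, and date content into a transformer without prior cleaning.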
