

Talk in Workshop: NeurIPS'24 Workshop on Causal Representation Learning

Towards Causal Foundation Model (Invited Talk by Cheng Zhang)

Cheng Zhang

Sun 15 Dec 10 a.m. PST — 10:30 a.m. PST

Abstract:

Foundation models have reshaped the landscape of machine learning, demonstrating sparks of human-level intelligence across a diverse array of tasks. However, it remains unclear whether they can answer causal questions, which are fundamental to understanding the world and making decisions. In this talk, I will present our current insights into the capabilities of existing foundation models in causal reasoning. We will discuss multiple paths toward causal foundation models and the steps we have taken in that direction. In particular, I will dive into one of our works, Causal Inference with Attention (CInA), which uses multiple unlabeled datasets to perform self-supervised causal learning. This enables zero-shot causal inference on unseen tasks with new data, based on our theoretical results demonstrating a primal-dual connection between optimal covariate balancing and self-attention. We show empirically that CInA generalizes effectively to out-of-distribution datasets and various real-world datasets, matching or even surpassing traditional per-dataset causal inference methodologies.
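The intuition behind the covariate-balancing/self-attention connection can be illustrated with a toy sketch. The snippet below is not the CInA method itself: the synthetic data, the fixed softmax kernel, and the `attention_weights` helper are illustrative assumptions. It merely shows how attention-style softmax weights over covariate similarities can act as balancing weights in a matching-style treatment-effect estimate; CInA instead learns such weights via its primal-dual objective across multiple datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data: covariates X, binary treatment T, outcome Y.
n, d = 200, 5
X = rng.normal(size=(n, d))
propensity = 1 / (1 + np.exp(-X[:, 0]))               # treatment depends on X (confounding)
T = (rng.random(n) < propensity).astype(float)
Y = 2.0 * T + X[:, 0] + rng.normal(scale=0.5, size=n)  # true effect is 2.0

def attention_weights(queries, keys):
    """Softmax over scaled dot products -- the functional form of self-attention."""
    scores = queries @ keys.T / np.sqrt(queries.shape[1])
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=1, keepdims=True)

# Each treated unit "attends" to control units with similar covariates,
# producing per-unit balancing weights that sum to one.
treated, control = T == 1, T == 0
W = attention_weights(X[treated], X[control])          # shape (n_treated, n_control)

# Matching-style estimate: treated outcome minus its attention-weighted
# average of control outcomes, averaged over treated units.
att = np.mean(Y[treated] - W @ Y[control])
print(f"estimated effect on the treated: {att:.2f}")
```

Because the attention weights concentrate on controls whose covariates resemble each treated unit, the estimate is pulled toward the true effect relative to a naive difference in means, which would be inflated by the confounder `X[:, 0]`.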
