Poster in: NeurIPS'24 Workshop on Causal Representation Learning
Zero-Shot Learning of Causal Models
Divyat Mahajan · Jannes Gladrow · Agrin Hilmkil · Cheng Zhang · Meyer Scetbon
With the increasing acquisition of datasets over time, we now have access to precise and varied descriptions of the world, capturing all sorts of phenomena. These datasets can be seen as empirical observations of unknown causal generative processes, or Structural Causal Models (SCMs). Recovering these causal generative processes from observations poses formidable challenges and typically requires learning a specific generative model for each dataset. In this work, we instead propose to learn a single model capable of inferring the causal generative processes of datasets in a zero-shot manner. Rather than learning a specific SCM for each dataset, we enable FiP, the architecture proposed by Scetbon et al., to infer the generative SCMs conditionally on their empirical representations; we term the resulting model cond-FiP. We show that cond-FiP can predict the true generative SCMs in zero-shot fashion and, as a by-product, can (i) generate new dataset samples and (ii) infer intervened ones. Our experiments demonstrate that cond-FiP achieves performance on par with SoTA methods trained specifically for each dataset, on both in- and out-of-distribution problems.
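To make the SCM terminology concrete, here is a minimal sketch of a Structural Causal Model and an intervention on it. This is a hypothetical 3-variable linear-Gaussian example for illustration only; it is not the paper's cond-FiP architecture, and the function name `sample_scm` and the coefficients are our own assumptions.

```python
import numpy as np

# Illustrative SCM (assumed example, not from the paper):
#   X1 = N1,  X2 = 2*X1 + N2,  X3 = X1 - X2 + N3
# where N1, N2, N3 are independent standard Gaussian noise terms.

def sample_scm(n, rng, do_x2=None):
    """Draw n samples from the SCM; optionally intervene do(X2 = do_x2)."""
    n1, n2, n3 = rng.normal(size=(3, n))
    x1 = n1
    # An intervention replaces X2's structural equation with a constant,
    # cutting the causal link from X1 to X2.
    x2 = np.full(n, do_x2) if do_x2 is not None else 2.0 * x1 + n2
    x3 = x1 - x2 + n3
    return np.stack([x1, x2, x3], axis=1)

rng = np.random.default_rng(0)
obs = sample_scm(10_000, rng)             # observational samples
itv = sample_scm(10_000, rng, do_x2=1.0)  # samples under do(X2 = 1)
```

In the paper's setting, a single trained model would infer such generative mechanisms directly from the empirical observations `obs`, rather than being handed the structural equations as above.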