Poster in Workshop: NeurIPS'24 Workshop on Causal Representation Learning
Unsupervised Causal Abstraction
Yuchen Zhu · Sergio Garrido Mejia · Bernhard Schölkopf · Michel Besserve
Causal abstraction aims at mapping a complex causal model into a simpler ("reduced") one. Causal consistency constraints have been established to link the initial "low-level" model to its "high-level" counterpart, and identifiability results for such mappings can be established when some information about the high-level variables is available. In contrast, we study the problem of learning a causal abstraction in an unsupervised manner, that is, when we have no information about the high-level causal model. In such a setting, there typically exist multiple causally consistent abstractions, and additional constraints are needed to select a high-level model unambiguously. To achieve this, we supplement a Kullback-Leibler-divergence-based consistency loss with a projection loss, which aims at finding the causal abstraction that best captures the variations of the low-level variables, thereby eliminating trivial solutions. The projection loss bears similarity to the Principal Component Analysis (PCA) algorithm; in this work, it is shown to have a causal interpretation. We experimentally show how the abstraction preferred by the projection loss varies with the causal coefficients.
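To make the objective concrete, below is a minimal sketch of how such a combined loss could look. It is not the authors' implementation: it assumes a linear abstraction map `T`, a hypothetical three-variable linear Gaussian low-level SCM, a Gaussian high-level model `(mu_h, cov_h)`, a closed-form Gaussian KL as the consistency term, and a PCA-like reconstruction-through-pseudo-inverse term as the projection loss; all names and the trade-off weight `lam` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-level linear Gaussian SCM: X1 -> X2 -> X3.
n = 5000
x1 = rng.normal(size=n)
x2 = 2.0 * x1 + rng.normal(size=n)
x3 = 0.5 * x2 + rng.normal(size=n)
X = np.stack([x1, x2, x3], axis=1)  # (n, 3) low-level samples

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """Closed-form KL( N(mu_p, cov_p) || N(mu_q, cov_q) )."""
    d = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(cov_q_inv @ cov_p)
                  + diff @ cov_q_inv @ diff - d
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

def loss(T, X, mu_h, cov_h, lam=1.0):
    """Consistency KL between the pushforward of P(X) through the linear
    abstraction T and the high-level Gaussian model, plus a PCA-like
    projection (reconstruction) term that rules out trivial abstractions."""
    Z = X @ T.T                              # abstracted samples, (n, k)
    mu_z, cov_z = Z.mean(axis=0), np.atleast_2d(np.cov(Z.T))
    kl = gaussian_kl(mu_z, cov_z, mu_h, cov_h)
    # Reconstruct X through the pseudo-inverse of T; abstractions that
    # discard low-level variance pay a large penalty here.
    X_hat = Z @ np.linalg.pinv(T).T
    proj = np.mean(np.sum((X - X_hat) ** 2, axis=1))
    return kl + lam * proj

# Example: score a random 2-dimensional abstraction against a standard
# Gaussian high-level model (both placeholders for illustration).
T = rng.normal(size=(2, 3))
print(loss(T, X, mu_h=np.zeros(2), cov_h=np.eye(2)))
```

Under these assumptions, minimizing the projection term alone would recover the top principal subspace of the low-level variables, which is why the combined objective can be read as a causally constrained analogue of PCA.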