Poster in Workshop: NeurIPS'24 Workshop on Causal Representation Learning

Robust Domain Generalisation with Causal Invariant Bayesian Neural Networks

Gaël Gendron · Michael Witbrock · Gillian Dobbie


Abstract:

Deep neural networks can obtain impressive performance on various tasks under the assumption that their training domain is identical to their target domain. Performance can drop dramatically when this assumption does not hold. One explanation for this discrepancy is the presence of spurious domain-specific correlations in the training data that the network exploits. Causal mechanisms, on the other hand, can be made invariant under distribution changes using transportability theory, which allows disentangling the domain-specific and stable factors underlying the data generation. Yet, learning transportable causal mechanisms to improve out-of-distribution generalisation in deep neural networks remains an under-explored area. We propose a Bayesian neural architecture that disentangles the learning of the data distribution from the inference process mechanisms. We show theoretically and experimentally that our model approximates reasoning under causal interventions. We demonstrate the performance of our method, which outperforms its point-estimate counterparts, on out-of-distribution image recognition tasks where the data distribution acts as a strong adversarial confounder.
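The abstract contrasts Bayesian networks, which maintain a distribution over weights, with their point-estimate counterparts. The sketch below is not the authors' architecture; it is a generic, minimal illustration (in NumPy, with hypothetical names) of the underlying idea: a layer with a factorised Gaussian posterior over its weights, whose predictions are obtained by Monte-Carlo averaging over sampled weights rather than from a single point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesianLinear:
    """Linear layer with a factorised Gaussian posterior over weights.

    Illustrative sketch only; names and structure are assumptions,
    not the architecture proposed in the paper.
    """

    def __init__(self, n_in, n_out):
        self.mu = rng.normal(0.0, 0.1, size=(n_in, n_out))  # posterior means
        self.log_sigma = np.full((n_in, n_out), -2.0)       # posterior log-stddevs

    def forward(self, x):
        # Sample weights via the reparameterisation trick: w = mu + sigma * eps
        eps = rng.normal(size=self.mu.shape)
        w = self.mu + np.exp(self.log_sigma) * eps
        return x @ w

def predict(layer, x, n_samples=50):
    # Monte-Carlo average over weight samples approximates the
    # posterior predictive; a point-estimate network would instead
    # use a single fixed weight matrix.
    return np.mean([layer.forward(x) for _ in range(n_samples)], axis=0)

layer = BayesianLinear(4, 2)
x = rng.normal(size=(3, 4))
y = predict(layer, x)
print(y.shape)  # (3, 2)
```

Averaging predictions over the weight posterior is what allows such models to express uncertainty about the learned mechanisms, rather than committing to a single set of (possibly spuriously correlated) parameters.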
