Poster in Workshop: Causality and Large Models
Counterfactual Causal Inference in Natural Language with Large Language Models
Gaël Gendron · Jože Rožanec · Michael Witbrock · Gillian Dobbie
Keywords: [ Causal structure discovery ] [ Counterfactual inference ] [ End-to-end ] [ Large language models ]
Causal structure discovery methods are commonly applied to structured data where the causal variables are known and statistical tests can be used to assess the causal relationships. By contrast, recovering a causal structure from unstructured natural language data, such as news articles, poses numerous challenges because neither the causal variables nor counterfactual data are available to estimate the causal links. Large Language Models (LLMs) have shown promising results in this direction but also exhibit limitations. This work investigates the abilities of LLMs to build causal graphs from text documents and perform counterfactual causal inference. We propose an end-to-end method for causal structure discovery and causal inference from natural language: we first use an LLM to extract the instantiated causal variables from text data and build a causal graph. We merge causal graphs from multiple data sources to represent the most exhaustive set of causes possible. We then conduct counterfactual inference on the estimated graph. Conditioning on the causal graph reduces LLM biases and better represents the causal estimands. We demonstrate the applicability of our method on real-world news articles.
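As a rough, non-authoritative illustration of the pipeline the abstract describes (extract causal pairs per document, merge the per-document graphs, then query the merged graph counterfactually), the Python sketch below stubs the LLM extraction step and uses networkx for the graph operations. All function names, the example documents, and the extracted (cause, effect) pairs are hypothetical placeholders, not the authors' implementation.

    import networkx as nx

    def extract_causal_pairs(document: str) -> list[tuple[str, str]]:
        """Return (cause, effect) pairs mentioned in the document.

        Stub: a real implementation would send `document` to an LLM with an
        extraction prompt and parse its answer; fixed pairs are returned
        here purely for illustration.
        """
        return [("interest rate hike", "borrowing cost increase"),
                ("borrowing cost increase", "reduced consumer spending")]

    def build_merged_graph(documents: list[str]) -> nx.DiGraph:
        """Merge per-document causal edges into one directed graph."""
        graph = nx.DiGraph()
        for doc in documents:
            graph.add_edges_from(extract_causal_pairs(doc))
        return graph

    def intervention_effects(graph: nx.DiGraph, variable: str) -> set[str]:
        """Do-style graph surgery: cut incoming edges to the intervened
        variable, then read off its causal descendants."""
        mutilated = graph.copy()
        mutilated.remove_edges_from(list(mutilated.in_edges(variable)))
        return nx.descendants(mutilated, variable)

    docs = ["Article 1 text ...", "Article 2 text ..."]
    g = build_merged_graph(docs)
    print(intervention_effects(g, "interest rate hike"))

Note that the reachability query above only approximates the counterfactual step: in the method described by the abstract, the counterfactual queries are answered by an LLM conditioned on the estimated causal graph, which is what reduces the model's biases.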