

Oral in Workshop: Causality and Large Models

Causal Order: The Key to Leveraging Imperfect Experts in Causal Inference

Aniket Vashishtha · Abbavaram Gowtham Reddy · Abhinav Kumar · Saketh Bachu · Vineeth N Balasubramanian · Amit Sharma

Keywords: [ causal effect ] [ Causal Order ] [ Imperfect Experts ] [ large language model ]

[ Project Page ]
Sat 14 Dec 11:15 a.m. PST — 11:30 a.m. PST
 
presentation: Causality and Large Models
Sat 14 Dec 8:45 a.m. PST — 5:30 p.m. PST

Abstract:

Large Language Models (LLMs) have recently been used to infer causal graphs, often by repeatedly applying a pairwise prompt that asks about the causal relationship of each variable pair. However, we identify a key limitation of using graphs as the output interface for the domain knowledge provided by LLMs and other imperfect experts. Even perfect experts cannot distinguish between direct and indirect edges given a pairwise prompt, leading to unnecessary errors. Instead, we propose causal order as a more stable output interface on which experts should be evaluated. Causal order is also a useful structure by itself; we show, both theoretically and empirically, that causal order correlates better with effect estimation error than commonly used graph metrics. We propose a triplet-based prompting method that considers three variables at a time rather than a pair of variables. For both LLMs and human annotators as experts, the proposed triplet method leads to a more accurate causal order with significantly fewer cycles. We also show how the estimated causal order can be used to reduce error in downstream discovery and effect inference.
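The triplet idea can be illustrated with a short sketch. The prompt wording, the placeholder ask_expert call, and the vote-based aggregation below are illustrative assumptions, not the paper's method; they only show how an expert's answers over variable triplets could be combined into a single causal order.

# Minimal sketch (assumptions noted in comments), not the authors' implementation.
from itertools import combinations
from collections import defaultdict

def triplet_prompt(a, b, c):
    # Hypothetical prompt template; the paper's exact wording may differ.
    return (f"Considering the variables {a}, {b}, and {c} together, "
            f"which of them, if any, is a cause of the others?")

def ask_expert(prompt):
    # Placeholder for an LLM or human-annotator call; expected to return the
    # variable judged to be causally upstream within the triplet, or None.
    raise NotImplementedError

def causal_order(variables, ask=ask_expert):
    # Count how often each variable is named as upstream across all triplets,
    # then sort so that higher counts come earlier. This simple vote
    # aggregation is an assumption made for illustration only.
    upstream_votes = defaultdict(int)
    for a, b, c in combinations(variables, 3):
        answer = ask(triplet_prompt(a, b, c))
        if answer in (a, b, c):
            upstream_votes[answer] += 1
    return sorted(variables, key=lambda v: -upstream_votes[v])

# Toy usage with a mock expert that always names "smoking" as the cause:
# causal_order(["smoking", "tar", "cancer"],
#              ask=lambda p: "smoking") -> ["smoking", "tar", "cancer"]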
