Poster in Workshop: NeurIPS'24 Workshop on Causal Representation Learning
Minimally orthogonal causal effect estimation
Yiman Ren · Daniel Clauw · Michael Burns · Maggie Makar
Abstract:
Estimating the causal effect of an intervention often involves estimating nuisance functions (the propensity score and outcome models) in addition to the target causal effect model. Double/Debiased ML (DML) enables the use of flexible ML models to estimate the nuisance functions while maintaining quasi-oracle convergence rates. The DML framework is appealing partly because it only requires the estimated nuisance functions to converge to their true values at a slow rate, namely any rate faster than $\mathcal{O}(n^{-1/4})$. Unfortunately, this seemingly mild assumption is often violated in practice, e.g., in high-dimensional settings. In this work, we relax this assumption and show that oracle rates are still achievable if the nuisance functions' prediction errors satisfy a certain orthogonality property. We propose an estimation method that enforces this condition by incorporating an invariance penalty. We show that our approach can achieve oracle convergence rates with respect to the target estimand even if the nuisance functions converge at a rate slower than $\mathcal{O}(n^{-1/4})$. We validate our findings empirically using fully- and semi-simulated data as well as real data.
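To make the DML framework the abstract builds on concrete, the following is a minimal sketch of standard cross-fitted double ML for a partially linear model, i.e., the baseline the paper improves upon, not the authors' invariance-penalized estimator. All function names (`dml_ate`, `lstsq_predict`) and the use of plain least squares for the nuisance models are our own illustrative assumptions.

```python
# Hedged sketch: cross-fitted DML (partialling out) for the partially
# linear model  y = theta * t + g(x) + eps,  t = m(x) + eta.
# Nuisance models here are ordinary least squares for simplicity.
import numpy as np

def lstsq_predict(X_tr, y_tr, X_te):
    """Fit OLS on the training fold and predict on the held-out fold."""
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ beta

def dml_ate(X, t, y, n_folds=2, seed=0):
    """Cross-fitted orthogonal estimate of theta via residual-on-residual
    regression: nuisances are fit out-of-fold so their errors enter the
    target estimate only through a second-order (orthogonal) term."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    t_res, y_res = np.zeros_like(y), np.zeros_like(y)
    for k in range(n_folds):
        te = folds[k]
        tr = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        t_res[te] = t[te] - lstsq_predict(X[tr], t[tr], X[te])  # t - m_hat(x)
        y_res[te] = y[te] - lstsq_predict(X[tr], y[tr], X[te])  # y - E_hat[y|x]
    # Solving the orthogonal (Neyman) moment condition for theta.
    return (t_res @ y_res) / (t_res @ t_res)

# Toy check on simulated data with true effect theta = 2.
rng = np.random.default_rng(1)
n, d = 2000, 5
X = rng.normal(size=(n, d))
t = X @ rng.normal(size=d) + rng.normal(size=n)
y = 2.0 * t + X @ rng.normal(size=d) + rng.normal(size=n)
print(dml_ate(X, t, y))
```

The cross-fitting (fitting nuisances on one fold, evaluating on another) is what buys the quasi-oracle rate under the $\mathcal{O}(n^{-1/4})$ assumption discussed above; the paper's contribution concerns what happens when that rate assumption fails.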