

Poster

Two-way Deconfounder for Off-policy Evaluation under Unmeasured Confounding

Shuguang Yu · Shuxing Fang · Ruixin Peng · Zhengling Qi · Fan Zhou · Chengchun Shi

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

This paper studies off-policy evaluation (OPE) in the presence of unmeasured confounders. Inspired by the two-way fixed effects regression model widely used in the panel data literature, we propose a two-way unmeasured confounding assumption to model the system dynamics in causal reinforcement learning and develop a two-way deconfounder, an algorithm that leverages the temporal and individual dependence among the latent factors to achieve consistent policy value estimation. The two-way deconfounder uses a relational neural network to simultaneously learn both the unmeasured confounders and the system dynamics, based on which a model-based estimator can be constructed to estimate the policy value. We illustrate the effectiveness of the proposed estimator through a combination of theoretical results and numerical experiments.
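The abstract does not include code, so the following is only a minimal PyTorch-style sketch of the idea it describes: a pair of latent factors, one per individual and one per time step (the "two-way" structure), fed together with the observed state and action into a network that predicts the next state and reward, with the learned model then rolled forward under the target policy to obtain a model-based value estimate. All names (`TwoWayDeconfounder`, `estimate_policy_value`), dimensions, and architecture choices here are illustrative assumptions; the paper's actual relational network and estimator may differ.

```python
import torch
import torch.nn as nn


class TwoWayDeconfounder(nn.Module):
    """Sketch of a two-way deconfounder (hypothetical implementation):
    learn unit-specific and time-specific latent confounders jointly
    with the system dynamics."""

    def __init__(self, n_units, n_steps, state_dim, action_dim,
                 latent_dim=8, hidden=64):
        super().__init__()
        # Two-way latent factors: one embedding per individual,
        # one per time step, mirroring two-way fixed effects.
        self.unit_emb = nn.Embedding(n_units, latent_dim)
        self.time_emb = nn.Embedding(n_steps, latent_dim)
        # Network combining observed (state, action) with both latents
        # to predict the next state and the reward.
        self.dynamics = nn.Sequential(
            nn.Linear(state_dim + action_dim + 2 * latent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim + 1),  # next state + scalar reward
        )

    def forward(self, unit_idx, time_idx, state, action):
        z = torch.cat([state, action,
                       self.unit_emb(unit_idx),
                       self.time_emb(time_idx)], dim=-1)
        out = self.dynamics(z)
        return out[..., :-1], out[..., -1]  # (next_state, reward)


@torch.no_grad()
def estimate_policy_value(model, policy, init_states, unit_idx,
                          horizon, gamma=0.99):
    """Model-based OPE: roll the target policy through the learned
    dynamics and average the discounted rewards."""
    state = init_states
    total = torch.zeros(init_states.shape[0])
    for t in range(horizon):
        time_idx = torch.full_like(unit_idx, t)
        action = policy(state)            # target policy, given as a callable
        state, reward = model(unit_idx, time_idx, state, action)
        total += (gamma ** t) * reward
    return total.mean()
```

In this sketch the model would be fit by minimizing a prediction loss (e.g., squared error on next states and rewards) over the observed trajectories, so that the embeddings absorb the unmeasured individual- and time-level confounding before the value estimate is computed.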
