Poster in Workshop: Algorithmic Fairness through the lens of Causality and Robustness
Bounded Fairness Transferability subject to Distribution Shift
Reilly Raab · Yatong Chen · Yang Liu
We study the \emph{transferability of fair predictors} (i.e., classifiers or regressors) under domain adaptation. Given a predictor that is “fair” on some \emph{source} distribution (of features and labels), is it still fair on a \emph{realized} distribution that differs? We first generalize common notions of static, statistical group-level fairness to a family of premetric functions that measure “induced disparity.” We quantify domain adaptation by bounding group-specific statistical divergences between the source and realized distributions. Next, we explore simplifying assumptions under which bounds on domain adaptation imply bounds on the change in induced disparity. We provide worked examples for two commonly used fairness definitions (i.e., demographic parity and equalized odds) and two models of domain adaptation (i.e., covariate shift and label shift), which prove to be special cases of our general method. Finally, we validate our theoretical results with synthetic data.
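To make “induced disparity” concrete, here is a minimal sketch of the kind of object the abstract describes; the symbols below ($d$, $\Delta_{\mathrm{DP}}$, groups $a$ and $b$, predictor $f$) are our illustrative notation, not the poster's. A premetric need only be nonnegative with zero self-distance, and demographic parity arises as the disparity a predictor induces between group-conditional expectations:
\[
d\colon \mathcal{P} \times \mathcal{P} \to \mathbb{R}_{\ge 0}, \qquad d(p, p) = 0,
\]
\[
\Delta_{\mathrm{DP}}(f; P) \;=\; \bigl|\, \mathbb{E}_{P_a}[f(X)] - \mathbb{E}_{P_b}[f(X)] \,\bigr|,
\]
where $P_g$ denotes the source feature distribution of group $g \in \{a, b\}$.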
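One elementary instance of how bounded group-specific divergences can imply a bounded change in disparity, under simplifying assumptions of our own choosing (total-variation-bounded shift and predictions in $[0, 1]$; the poster's actual divergences and bounds may differ): if the realized distributions $Q_g$ satisfy $\mathrm{TV}(P_g, Q_g) \le \varepsilon_g$ for each group, then
\[
\bigl|\, \mathbb{E}_{Q_g}[f(X)] - \mathbb{E}_{P_g}[f(X)] \,\bigr| \;\le\; \mathrm{TV}(P_g, Q_g) \;\le\; \varepsilon_g,
\]
and the triangle inequality yields the transfer bound
\[
\Delta_{\mathrm{DP}}(f; Q) \;\le\; \Delta_{\mathrm{DP}}(f; P) + \varepsilon_a + \varepsilon_b.
\]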
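The bound above can be checked numerically, in the spirit of the abstract's synthetic-data validation. The following Python sketch is our illustration, not the authors' experiment; every name and parameter in it is hypothetical. It draws discrete per-group feature distributions, perturbs each within its total-variation budget, and confirms that the demographic parity gap never grows by more than $\varepsilon_a + \varepsilon_b$.

    import numpy as np

    rng = np.random.default_rng(0)

    # Discrete feature space with K bins; per-group source distributions P_g.
    K = 10
    P = {g: rng.dirichlet(np.ones(K)) for g in ("a", "b")}

    # A fixed predictor f: X -> [0, 1] (a soft threshold on the bin index).
    f = 1.0 / (1.0 + np.exp(-(np.arange(K) - K / 2)))

    def disparity(dists):
        # Demographic-parity-style induced disparity: |E_a[f] - E_b[f]|.
        return abs(dists["a"] @ f - dists["b"] @ f)

    def tv(p, q):
        # Total variation distance between two discrete distributions.
        return 0.5 * np.abs(p - q).sum()

    def shift(p, eps, rng):
        # Random perturbation of p, projected so that TV(p, q) <= eps
        # while q stays on the probability simplex.
        q = np.clip(p + rng.normal(scale=eps, size=p.size), 1e-12, None)
        q /= q.sum()
        t = tv(p, q)
        if t > eps:
            q = p + (eps / t) * (q - p)  # convex move back toward p
        return q

    eps = {"a": 0.05, "b": 0.08}
    for _ in range(1000):
        Q = {g: shift(P[g], eps[g], rng) for g in ("a", "b")}
        # For f in [0, 1], |E_Q[f] - E_P[f]| <= TV(P_g, Q_g), so the
        # disparity can grow by at most eps_a + eps_b.
        assert disparity(Q) <= disparity(P) + eps["a"] + eps["b"] + 1e-9

    print("disparity change stayed within eps_a + eps_b for all trials")

The projection step in shift keeps the perturbed vector on the simplex while enforcing the total-variation budget exactly, so the assertion exercises the bound at its tightest admissible shift.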