Poster
The Fragility of Fairness: Causal Sensitivity Analysis for Fair Machine Learning
Jake Fawkes · Nic Fishman · Mel Andrews · Zachary Lipton
West Ballroom A-D #5507
Fairness metrics are a core tool in the fair machine learning (FairML) literature, used to determine that ML models are, in some sense, “fair.” Real-world data, however, are typically plagued by various measurement biases and other violated assumptions, which can render fairness assessments meaningless. We adapt tools from causal sensitivity analysis to the FairML context, providing a general framework which (1) accommodates effectively any combination of fairness metric and bias that can be posed in the “oblivious setting”; (2) allows researchers to investigate combinations of biases, resulting in non-linear sensitivity; and (3) enables flexible encoding of domain-specific constraints and assumptions. Employing this framework, we analyze the sensitivity of the most common parity metrics under 3 varieties of classifier across 14 canonical fairness datasets. Our analysis reveals the striking fragility of fairness assessments to even minor dataset biases. We show that causal sensitivity analysis provides a powerful and necessary toolkit for gauging the informativeness of parity metric evaluations. Our repository is available at https://github.com/Jakefawkes/fragile_fair.
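As a toy illustration of the kind of sensitivity question the abstract describes (not the paper's actual framework, which is available in the linked repository), the sketch below bounds the demographic parity gap when up to an assumed fraction of recorded predictions in each group may be mismeasured. All function names, parameters, and numbers here are hypothetical.

```python
# Illustrative sketch only: interval sensitivity of the demographic parity gap
# under an assumed bound `eps` on the fraction of mislabelled predictions per group.
import numpy as np


def dp_gap_bounds(y_hat: np.ndarray, group: np.ndarray, eps: float) -> tuple[float, float]:
    """Return the (lowest, highest) achievable |P(Yhat=1|A=0) - P(Yhat=1|A=1)|
    when at most an eps-fraction of predictions within each group are flipped."""
    intervals = []
    for a in (0, 1):
        p_a = y_hat[group == a].mean()
        # Flipping at most an eps-fraction of a group's predictions moves its
        # selection rate by at most eps (clipped to [0, 1]).
        intervals.append((max(0.0, p_a - eps), min(1.0, p_a + eps)))
    (lo0, hi0), (lo1, hi1) = intervals
    # Smallest gap: 0 if the two intervals overlap, else the distance between them.
    lowest = max(0.0, max(lo0, lo1) - min(hi0, hi1))
    # Largest gap: push the two group rates to opposite extremes of their intervals.
    highest = max(abs(hi0 - lo1), abs(hi1 - lo0))
    return lowest, highest


# Hypothetical example: a small observed gap (~0.02) is consistent with a true
# gap of roughly 0.12 once a 5% per-group measurement error is allowed.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)
y_hat = rng.binomial(1, np.where(group == 0, 0.30, 0.32))
print(dp_gap_bounds(y_hat, group, eps=0.05))
```

This worst-case interval style of reasoning is only a simplified stand-in; the paper's framework handles general combinations of fairness metrics and biases posed in the oblivious setting.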