Oral in Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation
Contributed talk: The Search for Less Discriminatory Algorithms: Limits and Opportunities
Benjamin Laufer · Manish Raghavan · Solon Barocas
Sat 14 Dec 9 a.m. PST — 5:30 p.m. PST
In disparate impact doctrine, establishing that a firm's decisions were unlawfully discriminatory rests on the plaintiff's ability to put forward a \textit{less discriminatory alternative} (LDA) policy that achieves the firm's stated business needs at least as successfully, but that reduces disparities across populations. If such an LDA exists, it serves as evidence that a firm \textit{could have chosen a less discriminatory policy, but didn't}. As firms turn to algorithms, there is increasing interest in how the LDA should be understood and operationalized. This paper puts forward three fundamental (negative) results, each of which represents a limit to searching for and using LDAs. First, we find that, given an initial classifier, determining whether an LDA exists is computationally intractable (NP-hard) in general. Second, we show that there are bounds on how much the gap in selection rates between groups can be closed at any given level of accuracy, determined by the size of each group and the base rate of the property or outcome of interest in each group. Finally, we observe that firms often design algorithms before accessing information about the particular population subjected to an algorithm, so higher performance on a fixed dataset may not mean there is an LDA that generalizes to new populations. Each of these claims is likely to be brought up in court, especially by firms seeking to defend themselves against liability for discrimination. However, these claims only tell part of the story. For each of our negative results limiting what is attainable in this setting, we offer \textit{positive} results demonstrating that there exist effective and low-cost strategies that are remarkably powerful, if not perfect. These strategies enable firms to reliably unearth less discriminatory models that generalize to new populations and meaningfully benefit consumers, including members of the disadvantaged population of interest, when such models exist.
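The second result can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only, not the paper's formal bound: it uses the simple observation that a classifier whose selection rate in a group differs from that group's base rate must mislabel at least that many people in the group, so forcing equal selection rates caps attainable accuracy. The function name, group sizes, and base rates are hypothetical choices for the example.

```python
# Illustrative sketch, not the paper's theorem: if group g has n_g members
# and base rate p_g, any classifier selecting at rate s in group g must
# mislabel at least |s - p_g| * n_g people in that group. Equalizing
# selection rates at s therefore bounds overall accuracy from above.

def max_accuracy_at_equal_selection(s, groups):
    """Upper bound on accuracy when every group's selection rate is s.

    groups: list of (n_g, p_g) pairs -- group size and base rate.
    """
    total = sum(n for n, _ in groups)
    min_errors = sum(abs(s - p) * n for n, p in groups)
    return 1 - min_errors / total

# Hypothetical population: 800 people with base rate 0.5, 200 with 0.2.
# A perfect classifier selects at the base rates (gap 0.3, accuracy 1.0);
# forcing both groups to the pooled base rate of 0.44 costs accuracy:
groups = [(800, 0.5), (200, 0.2)]
bound = max_accuracy_at_equal_selection(0.44, groups)  # 0.904
```

The point of the sketch is the trade-off itself: the accuracy cost of closing the selection-rate gap scales with the base-rate difference and the group sizes, which is why such bounds exist at all.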