Poster in Workshop: Algorithmic Fairness through the lens of Causality and Robustness
Implications of Modeled Beliefs for Algorithmic Fairness in Machine Learning
Ruth Urner · Jeff Edmonds · Karan Singh
Ethics and societal implications of automated decision making have become a major theme in Machine Learning research. Conclusions from theoretical studies in this area are often stated in general terms (e.g., that affirmative action may hurt all groups, or that fairness measures are incompatible with rational decision making). Our work aims to highlight the degree to which such conclusions in fact rely on modeled beliefs, as well as on the technicalities of the chosen framework of analysis (e.g., statistical learning theory, game theory, dynamics). We carefully discuss prior work through this lens and then highlight the effect of modeled beliefs by means of a simple statistical model in which an observed score X results from two unobserved hidden variables: "talent" T and "environment" E. We assume that T is identically distributed in the two subgroups of a population, while E models the disparities between an advantaged and a disadvantaged group. We analyze (Bayes-)optimal decision making under a variety of distributional assumptions and show that even this simple model exhibits counterintuitive effects.
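As a concrete illustration of how modeled beliefs shape optimal decisions, the sketch below simulates one possible instantiation of the model described in the abstract. The additive form X = T + E, the Gaussian distributions, and all parameter values are our assumptions for illustration only; the abstract itself does not fix a functional form and considers a variety of distributional assumptions. Under these particular assumptions, the Bayes-optimal estimate of talent shrinks the observed score toward each group's mean, so the group-aware optimal acceptance rule on X is group-dependent.

```python
# Minimal sketch (our illustration, not the paper's exact setup): an additive
# score X = T + E with Gaussian components. "Talent" T is identically
# distributed in both groups; "environment" E has a lower mean in the
# disadvantaged group. The Bayes-optimal estimate of T given X shrinks X
# toward each group's mean, so the optimal rule on X differs by group.
import numpy as np

rng = np.random.default_rng(0)

mu_T, s_T = 0.0, 1.0          # talent: identical across groups
s_E = 1.0                      # environment noise scale (shared, by assumption)
mu_E = {"advantaged": 1.0, "disadvantaged": -1.0}  # modeled disparity

def posterior_mean_T(x, mu_e):
    """E[T | X = x] under the Gaussian model: shrink x toward the group mean."""
    w = s_T**2 / (s_T**2 + s_E**2)
    return mu_T + w * (x - (mu_T + mu_e))

for group, mu_e in mu_E.items():
    T = rng.normal(mu_T, s_T, 100_000)
    E = rng.normal(mu_e, s_E, 100_000)
    X = T + E
    # Group-aware Bayes-optimal rule "accept iff E[T | X] > 0": solving
    # posterior_mean_T(x, mu_e) > 0 for x gives a group-specific threshold.
    x_threshold = (mu_T + mu_e) - mu_T * (s_T**2 + s_E**2) / s_T**2
    accepted = X > x_threshold
    print(f"{group:>13}: X-threshold = {x_threshold:+.2f}, "
          f"acceptance rate = {accepted.mean():.3f}, "
          f"mean talent among accepted = {T[accepted].mean():+.2f}")
```

Note the group-specific thresholds: under these Gaussian assumptions, a decision maker who believes this model should read the same observed score differently for the two groups. How such conclusions change under other distributional assumptions is exactly the kind of belief-dependence the abstract points to.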