Poster in Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations
Outliers Exist: What Happens if You are a Data-Driven Exception?
Sarah Cen · Manish Raghavan
Data-driven tools are increasingly used to make consequential decisions. In recent years, they have begun to advise employers on which job applicants to interview, judges on which defendants to grant bail, lenders on which homeowners to grant loans, and more. In such settings, different data-driven rules result in different decisions. The problem is that, for every data-driven rule, there are exceptions: while a data-driven rule may be appropriate for some, it may not be appropriate for all. In this piece, we argue that existing frameworks do not fully encompass this view. As a result, individuals are often made, through no fault of their own, to bear the burden of being data-driven exceptions. We discuss how data-driven exceptions arise and provide a framework for understanding how the burden they bear can be relieved. Our framework requires balancing three considerations: individualization, uncertainty, and harm. Importantly, no single consideration trumps the others. We emphasize the importance of uncertainty, advocating that decision-makers should act on a data-driven recommendation only if the levels of individualization and certainty are high enough to justify the harm the recommendation could cause. We argue that data-driven decision-makers have a duty to weigh these three components before making a decision, and we connect each component to existing methods.
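To make the balance concrete, below is a minimal, hypothetical sketch of how a decision-maker might operationalize the three-way trade-off: follow a data-driven recommendation only when individualization and certainty together are high enough to justify the potential harm. The `Recommendation` fields, the product-based justification score, and the `slack` parameter are illustrative assumptions for this sketch, not the authors' method.

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    score: float              # the model's recommendation score for this individual
    individualization: float  # in [0, 1]: how much the rule reflects this individual
    certainty: float          # in [0, 1]: confidence that the rule applies to them
    harm: float               # in [0, 1]: severity of harm if the recommendation is wrong


def should_follow(rec: Recommendation, slack: float = 0.0) -> bool:
    """Follow the recommendation only when individualization and certainty
    are high enough to justify the potential harm.

    Hypothetical scalar rule: treat the product of individualization and
    certainty as the available justification, and require it to exceed the
    potential harm (plus an optional slack margin). If the test fails, the
    decision-maker should defer to individualized review rather than act
    on the data-driven rule.
    """
    justification = rec.individualization * rec.certainty
    return justification >= rec.harm + slack


# Example: a high-stakes bail decision where the rule is weakly individualized.
bail_rec = Recommendation(score=0.8, individualization=0.4, certainty=0.9, harm=0.7)
print(should_follow(bail_rec))  # False -> defer to individualized review
```

Under this sketch, no single component dominates: high certainty alone cannot rescue a poorly individualized rule when the stakes are high, mirroring the framework's insistence that the three considerations be weighed jointly.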