Poster in Workshop: Algorithmic Fairness through the Lens of Time
Equal Opportunity under Performative Effects
Sophia Gunluk · Dhanya Sridhar · Antonio Gois · Simon Lacoste-Julien
There is growing interest in automating decision-making by using machine learning (ML) models to estimate scores for ranking candidates. Consider, as an example, loan application decisions. An institution might train an ML model on historical data to predict the probability that a candidate will default on a loan, based on features in their application. For new applicants, the trained model predicts a score that the institution can use to approve or reject applications, or at least to rank applicants for further review.

In these ML-based decision-making contexts, the field of algorithmic fairness has developed a number of metrics to assess the disparities faced by different demographic groups, as well as by individuals. However, algorithmic decision-making is often dynamic: individuals respond to the deployment of ML models and their automated predictions. In the loan example, rejected applicants may adapt their applications to obtain a better outcome, either by gaming the classifier or by making changes that genuinely make them more loan-worthy in the future. These dynamics create a feedback loop between the model's predictions and the true outcomes (e.g., whether an applicant defaults on a loan), a loop that is hard to analyze and is ignored by most existing fairness metrics, which assume a static data-generating process. The fields of strategic classification and performative prediction have recently begun to address this phenomenon in the context of machine learning, but most works in this emerging line ignore potential disparities between different segments of the population: their models do not account for factors that may lead demographic groups to adapt differently, which can produce unfair decisions when the same model is applied across groups.
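To make the feedback loop concrete, the following is a minimal sketch, not the authors' model, of performative effects under group-dependent adaptation: applicants from two hypothetical groups shift a single feature toward a deployed approval threshold, but at different costs, and we track the equal-opportunity gap (the difference in true positive rates) across deployment rounds. The population parameters, the threshold, the cost values, and the linear best-response rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_population(n, repay_rate):
    """True label y=1 means the applicant would repay; the feature x is a
    noisy credit signal correlated with y."""
    y = rng.binomial(1, repay_rate, n)
    x = y + rng.normal(0.0, 1.0, n)
    return x, y

def best_response(x, threshold, cost):
    """Rejected applicants move their feature toward the approval threshold,
    scaled by a per-unit cost; here, gaming does not change the true label."""
    gap = np.clip(threshold - x, 0.0, None)
    return x + gap / cost

def tpr(x, y, threshold):
    """True positive rate: fraction of truly repaying applicants approved."""
    approved = x >= threshold
    return approved[y == 1].mean()

threshold = 0.8
# Assumed asymmetry: group A adapts cheaply; group B pays more per unit change.
cost = {"A": 2.0, "B": 5.0}

for t in range(3):  # deployment rounds
    rates = {}
    for g in ("A", "B"):
        x, y = sample_population(10_000, repay_rate=0.6)
        for _ in range(t):  # t rounds of strategic adaptation to the model
            x = best_response(x, threshold, cost[g])
        rates[g] = tpr(x, y, threshold)
    print(f"round {t}: TPR_A={rates['A']:.3f}  TPR_B={rates['B']:.3f}  "
          f"EO gap={rates['A'] - rates['B']:.3f}")
```

Under these assumptions, the two groups start with identical true positive rates, but because group A can respond to the deployed threshold more cheaply, an equal-opportunity gap opens and widens over rounds even though the underlying repayment rates never differ, which is precisely the kind of disparity that static fairness metrics miss.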