Poster in Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation

Different Horses for Different Courses: Comparing Bias Mitigation Algorithms in ML

Prakhar Ganesh · Usman Gohar · Lu Cheng · Golnoosh Farnadi

Keywords: [ Evaluation Methods and Techniques ] [ Bias Mitigation ] [ Metrics and Evaluation ]

Presentation: Algorithmic Fairness through the lens of Metrics and Evaluation
Sat 14 Dec, 9 a.m.–5:30 p.m. PST

Abstract:

With fairness concerns gaining significant attention in machine learning (ML), several bias mitigation techniques have been proposed, often compared against each other in an attempt to find the best method. These benchmarking efforts tend to use a common setup for evaluation, under the assumption that a uniform environment ensures a fair comparison. However, bias mitigation techniques are highly sensitive to hyperparameter choices, random seeds, feature selection, and other pipeline decisions, so comparing them in just one setting can unfairly favour certain algorithms. In this work, we show significant variance in the fairness achieved by several bias mitigation algorithms, as well as the influence of the learning pipeline on fairness scores. We then highlight that most bias mitigation techniques can achieve comparable performance when given the freedom to perform hyperparameter optimization, suggesting that the choice of evaluation parameters, rather than the mitigation technique itself, can sometimes create the perceived superiority of one method over another. We hope that our work encourages future research toward a deeper understanding of how the various choices made in the lifecycle of developing an algorithm impact fairness, and toward trends that guide the selection of appropriate algorithms.
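
To make the sensitivity claim concrete, here is a minimal sketch, not the authors' experimental setup: the synthetic dataset, the logistic-regression "method", the derived sensitive attribute, and the demographic-parity metric are all illustrative assumptions. It measures how a single fairness score moves when only the train/test split seed and one regularization hyperparameter change.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def demographic_parity_gap(y_pred, sensitive):
        # |P(yhat = 1 | s = 0) - P(yhat = 1 | s = 1)|
        return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

    # Synthetic task with a hypothetical binary sensitive attribute derived
    # from one feature; purely illustrative, not a real benchmark.
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    s = (X[:, 0] > 0).astype(int)

    for C in (0.01, 1.0, 100.0):              # a tiny hyperparameter grid
        gaps = []
        for seed in range(10):                # vary only the random seed
            X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
                X, y, s, test_size=0.3, random_state=seed)
            clf = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
            gaps.append(demographic_parity_gap(clf.predict(X_te), s_te))
        print(f"C={C:>6}: gap mean={np.mean(gaps):.3f} "
              f"std={np.std(gaps):.3f} range=[{min(gaps):.3f}, {max(gaps):.3f}]")

Even with the pipeline otherwise fixed, the reported gap shifts across seeds and hyperparameter values, which illustrates why comparing mitigation methods on a single setting can be misleading.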
