

Poster in Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation

On Optimal Subgroups for Group Distributionally Robust Optimisation

Anissa Alloula · Daniel McGowan · Bartlomiej W. Papiez

Keywords: [ Data collection and curation ] [ Bias Mitigation ] [ Algorithm Development ] [ Fairness ]


Abstract:

Defining the data subgroups across which a model may be biased is a crucial step in most bias identification and mitigation methods. However, very little attention has been paid to how these groups should be selected, and most current research simply relies on the simplistic or coarse subgroup definitions that are most readily available, such as binary gender or race categories. Preliminary experiments investigating different subgroup definitions (e.g. binary, noisy, coarse, random, and otherwise imperfect subgroups) for group distributionally robust optimisation (gDRO) in a toy image classification scenario reveal that optimal bias mitigation is highly dependent on how subgroups are defined. Indeed, gDRO fails completely when subgroups mix correlated and non-correlated samples, whereas with appropriate grouping it substantially improves fairness (and slightly improves overall performance) compared to a conventionally trained model. By improving subgroup characterisation, we can unlock the full potential of bias mitigation methods and increase their effectiveness across a wider range of applications.
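For context, gDRO is commonly formulated as minimising the worst-case expected loss over the predefined subgroups. A standard sketch of this objective (not taken verbatim from the poster), where \(\mathcal{G}\) denotes the chosen subgroup partition, \(P_g\) the data distribution of group \(g\), and \(\ell\) the per-sample loss, is:

\[
\min_{\theta} \; \max_{g \in \mathcal{G}} \; \mathbb{E}_{(x, y) \sim P_g}\big[ \ell(\theta; x, y) \big]
\]

Under this objective, the partition \(\mathcal{G}\) directly determines which distributions the optimiser must be robust to, which is why grouping correlated and non-correlated samples together can undermine the mitigation described above.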
