

Poster in Workshop: Mathematics of Modern Machine Learning (M3L)

Increasing Fairness via Combination with Learning Guarantees

Yijun Bian · Kun Zhang

Keywords: [ weighted vote ] [ fairness ] [ machine learning ] [ learning bound ]


Abstract:

Concern about hidden discrimination in ML models is growing as their widespread real-world application increasingly impacts human lives. Various techniques, including commonly used group fairness measures and several fairness-aware ensemble-based methods, have been developed to enhance fairness. However, existing fairness measures typically focus on only one aspect, either group or individual fairness, and the difficulty of satisfying both simultaneously means biases may remain even when one of them is satisfied. Moreover, existing mechanisms for boosting fairness usually present empirical results to demonstrate validity, yet few discuss whether fairness can be boosted with theoretical guarantees. To address these issues, we propose a fairness quality measure named "discriminative risk" (DR) that reflects both the individual and group fairness aspects. Furthermore, we investigate its properties and establish first- and second-order oracle bounds showing that fairness can be boosted via ensemble combination with theoretical learning guarantees. The analysis applies to both binary and multi-class classification. Comprehensive experiments are conducted to evaluate the effectiveness of the proposed methods.
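The abstract does not spell out how discriminative risk is computed; a minimal sketch of one plausible empirical estimate, assuming DR is measured as the rate at which a model's predictions change when the (binary) sensitive attribute is perturbed (the helper `empirical_dr` and the toy classifiers are hypothetical, not the authors' implementation):

```python
import numpy as np

def empirical_dr(predict, X, sensitive_col):
    """Estimate discriminative risk as the disagreement rate between
    predictions on the original data and on data where the binary
    sensitive attribute has been flipped (an assumed proxy, not the
    paper's exact definition)."""
    X_perturbed = X.copy()
    X_perturbed[:, sensitive_col] = 1 - X_perturbed[:, sensitive_col]
    return float(np.mean(predict(X) != predict(X_perturbed)))

# Toy data: column 0 is the sensitive attribute.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 3)).astype(float)

# A classifier that uses the sensitive attribute, and one that ignores it.
biased = lambda X: (X[:, 0] + X[:, 1] > 1).astype(int)
fair = lambda X: (X[:, 1] + X[:, 2] > 1).astype(int)

print(empirical_dr(biased, X, sensitive_col=0))  # > 0: predictions shift
print(empirical_dr(fair, X, sensitive_col=0))    # 0.0: invariant to the flip
```

A per-instance version of the same quantity (did this individual's prediction change?) captures the individual-fairness reading, while its expectation over the population gives a group-level score, which is the dual role the abstract ascribes to DR.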
