Poster in Workshop: Mathematics of Modern Machine Learning (M3L)
Does Machine Bring in Extra Bias in Learning? Approximating Discrimination Within Models Quickly
Yijun Bian · Yujie Luo · Ping Xu
Keywords: [ Fairness measure ] [ Multi-attribute protection ] [ Machine learning ]
Mitigating discrimination in machine learning (ML) models is complicated because multiple factors may interweave with one another, both hierarchically and historically. Yet few existing fairness measures can capture the level of discrimination within ML models when multiple sensitive attributes are involved. To bridge this gap, we propose a fairness measure based on distances between sets from a manifold perspective, named 'harmonic fairness measure via manifolds (HFM)', with three optional versions, which enables fine-grained discrimination evaluation for several sensitive attributes with binary or multiple values. To accelerate the computation of distances between sets, we further propose approximation algorithms for efficient bias evaluation. Empirical results demonstrate that our proposed fairness measure HFM is valid and that the approximation algorithms are effective and efficient.
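The abstract does not give HFM's definition, so the following is only a minimal sketch of the general idea it describes: quantifying disparity as a distance between sets (here, per-group model outputs) and accelerating that distance via sampling. The symmetric nearest-neighbour distance, the subsampling scheme, and all names (`set_distance`, `approx_set_distance`, `m`) are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch only: a generic distance between two sets of model
# outputs, approximated by uniform subsampling. Not the HFM definition.
import numpy as np

def set_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric average nearest-neighbour distance between point sets A and B."""
    # Pairwise Euclidean distances, shape (len(A), len(B)).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Average each point's distance to its nearest neighbour in the other set.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def approx_set_distance(A: np.ndarray, B: np.ndarray, m: int = 256, seed: int = 0) -> float:
    """Accelerate set_distance by subsampling at most m points per set (assumed scheme)."""
    rng = np.random.default_rng(seed)
    def sub(X):
        idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
        return X[idx]
    return set_distance(sub(A), sub(B))

# Hypothetical usage: outputs of a model for two sensitive groups.
g0 = np.random.randn(5000, 4)  # e.g., per-sample scores for group 0
g1 = np.random.randn(6000, 4)  # e.g., per-sample scores for group 1
print(approx_set_distance(g0, g1))  # larger distance suggests larger disparity
```

Subsampling reduces the quadratic cost of the exact pairwise computation to a fixed budget of at most m × m distance evaluations per pair of groups, which is one plausible way an approximation could trade accuracy for speed; the paper's actual approximation algorithms may differ.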