Poster in Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation
Measuring the Impact of Equal Treatment as Blindness via Explanations Disparity
Carlos Mougan · Salvatore Ruggieri · Laura State · Antonio Ferrara · Steffen Staab
Keywords: [ Novel fairness metrics ] [ Interdisciplinary considerations ] [ Metrics ] [ Evaluation Methods and Techniques ]
Sat 14 Dec 9 a.m. PST — 5:30 p.m. PST
Liberal political philosophy advocates for the policy of \emph{equal treatment as blindness}, which seeks to achieve fairness by treating individuals without directly considering their protected characteristics. However, this policy has faced longstanding criticism for perpetuating existing inequalities. In machine learning, this policy can be translated into the concept of \emph{fairness as unawareness} and measured using disparate impact metrics such as Demographic Parity (a.k.a. Statistical Parity). Our analysis reveals that Demographic Parity does not faithfully measure whether a model treats individuals independently of the protected attribute. We introduce the Explanation Disparity metric to measure fairness under the \emph{equal treatment as blindness} policy. Our metric evaluates the fairness of predictive models by analyzing the extent to which the protected attribute can be inferred from the distribution of explanation values, specifically Shapley values. The proposed metric tests for statistical independence of the explanation distributions over populations with different protected characteristics. We establish theoretical properties of Explanation Disparity and devise an equal treatment inspector based on the AUC of a Classifier Two-Sample Test. We experiment on synthetic and natural data to demonstrate the notion and compare it with related fairness metrics. We release \texttt{explanationspace}, an open-source Python package with methods and tutorials.
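The inspector described above can be sketched in a few lines: compute Shapley values of a model's predictions, then run a Classifier Two-Sample Test that tries to predict the protected attribute from those explanations; an AUC near 0.5 indicates the explanation distributions are indistinguishable across groups. This is a minimal illustration using scikit-learn, not the \texttt{explanationspace} API: the dataset, the random protected attribute \texttt{z}, and the choice of logistic regression are all illustrative assumptions.

```python
# Sketch of an equal-treatment inspector: a Classifier Two-Sample Test
# (C2ST) on Shapley values. All data and model choices are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features X, labels y, and a binary protected attribute z
# (here drawn independently of X, so equal treatment should hold).
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
z = rng.integers(0, 2, size=len(X))

# Model under audit, trained without z ("fairness as unawareness").
f = LogisticRegression().fit(X, y)

# Shapley values of f's predictions. For a linear model with independent
# features, exact Shapley values reduce to coef * (x - mean(x)).
shap_values = f.coef_ * (X - X.mean(axis=0))

# C2ST: train an inspector to predict z from the explanations and
# measure its held-out AUC.
S_tr, S_te, z_tr, z_te = train_test_split(shap_values, z, random_state=0)
inspector = LogisticRegression().fit(S_tr, z_tr)
auc = roc_auc_score(z_te, inspector.predict_proba(S_te)[:, 1])

# AUC close to 0.5 => explanation distributions are statistically
# indistinguishable across protected groups (no Explanation Disparity).
print(f"C2ST AUC on Shapley values: {auc:.3f}")
```

If the model leaked information about the protected attribute through proxy features, the inspector's AUC would rise above 0.5, flagging a violation of equal treatment.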