

Poster in Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation

Exploring AUC-like metrics to propose threshold-independent fairness evaluation

Daniel Gratti · Thalita Veronese · Marcos M. Raimundo

Keywords: [ Novel fairness metrics ] [ Metrics ]


Abstract:

Credit scoring, spam filtering, and fraud detection are common binary classification applications. Depending on economic conditions, user preferences, and environmental changes, model operators often tune the decision threshold to accommodate distinct demands. In such cases, especially in credit scoring, it is common to train and evaluate the model with a threshold-independent metric such as the Area Under the Receiver Operating Characteristic curve (AUC-ROC). Common fairness metrics are inappropriate in these scenarios because they evaluate the classifier at a single decision threshold and therefore do not guarantee fair behavior at other thresholds or under other preferences. This paper explores a formalization of AUC-like metrics to propose a family of threshold-independent fairness metrics. Our statistical and theoretical evaluation shows that the formalized AUC-like metrics are equivalent to the traditional AUC and to the newly proposed fairness metrics.
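To make the idea concrete, the sketch below shows one possible threshold-independent fairness measure: the gap between group-wise AUC-ROC values, computed with scikit-learn on synthetic data. This is only an illustration of the general concept under that assumption; the specific metrics formalized in the paper may differ.

```python
# Illustrative sketch only: one possible threshold-independent fairness
# measure, the absolute gap between group-wise AUC-ROC values. This is an
# assumption for illustration, not necessarily the paper's proposed metric.
import numpy as np
from sklearn.metrics import roc_auc_score


def groupwise_auc_gap(y_true, y_score, group):
    """Absolute difference between AUC-ROC computed separately per group.

    A gap near 0 means ranking quality is similar across groups regardless
    of the decision threshold, since AUC aggregates over all thresholds.
    """
    aucs = []
    for g in np.unique(group):
        mask = group == g
        aucs.append(roc_auc_score(y_true[mask], y_score[mask]))
    return max(aucs) - min(aucs)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2000
    group = rng.integers(0, 2, size=n)    # binary protected attribute
    y_true = rng.integers(0, 2, size=n)   # binary labels
    # Synthetic scores: noisier (less informative) for group 1.
    noise = np.where(group == 1, 0.6, 0.3)
    y_score = y_true + rng.normal(0.0, noise, size=n)

    print("Group-wise AUC gap:", round(groupwise_auc_gap(y_true, y_score, group), 3))
```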
