

Spotlight in Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation

Multi-Output Distributional Fairness via Post-Processing

Gang Li · Qihang Lin · Ayush Ghosh · Tianbao Yang

Keywords: [ Bias Mitigation ] [ Algorithm Development ] [ Metrics ] [ General Fairness ]

Sat 14 Dec 5:27 p.m. PST — 5:30 p.m. PST
 
Presentation: Algorithmic Fairness through the lens of Metrics and Evaluation
Sat 14 Dec 9 a.m. PST — 5:30 p.m. PST

Abstract:

Post-processing approaches are becoming prominent techniques for enhancing the fairness of machine learning models because of their intuitiveness, low computational cost, and excellent scalability. However, most existing post-processing methods are designed for task-specific fairness measures and are limited to single-output models. In this paper, we introduce a post-processing method for multi-output models, such as those used for multi-task/multi-class classification and representation learning, to enhance a model's distributional parity, a task-agnostic fairness measure. Existing techniques for achieving distributional parity rely on the (inverse) cumulative distribution function of a model's output, which restricts them to single-output models. Extending these works, our method employs an optimal transport mapping to move a model's outputs across different groups towards their empirical Wasserstein barycenter. An approximation technique is applied to reduce the complexity of computing the exact barycenter, and a kernel regression method is proposed to extend this process to out-of-sample data. Our empirical studies, which compare our method to existing post-processing baselines on multi-task/multi-class classification and representation learning tasks, demonstrate the effectiveness of the proposed approach.
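The single-output case the abstract contrasts with can be sketched concretely: in one dimension, the Wasserstein barycenter of the per-group score distributions is the average of their quantile functions, and distributional parity is achieved by pushing each group's scores through its empirical CDF and then through the barycenter's inverse CDF. The sketch below illustrates that idea (function name and quantile-grid size are our own choices, not from the paper; the paper's contribution is the multi-output generalization via optimal transport, which this 1-D example does not capture):

```python
import numpy as np

def barycenter_postprocess(scores, groups):
    """Illustrative 1-D post-processing: map each group's scores onto the
    empirical Wasserstein barycenter of the per-group score distributions.
    (Hypothetical sketch of the single-output, CDF-based approach; the
    paper extends this to multi-output models.)"""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    uniq = np.unique(groups)
    # Common quantile grid on [0, 1]
    q = np.linspace(0.0, 1.0, 101)
    # Per-group empirical quantile functions (inverse CDFs)
    quantiles = {g: np.quantile(scores[groups == g], q) for g in uniq}
    # In 1-D, the Wasserstein barycenter's quantile function is the
    # average of the groups' quantile functions
    bary_q = np.mean([quantiles[g] for g in uniq], axis=0)
    adjusted = np.empty_like(scores)
    for g in uniq:
        s = scores[groups == g]
        # Empirical CDF value (mid-rank) of each score within its group
        ranks = (np.argsort(np.argsort(s)) + 0.5) / len(s)
        # Push through the barycenter's inverse CDF
        adjusted[groups == g] = np.interp(ranks, q, bary_q)
    return adjusted
```

After this transformation, every group's adjusted scores follow (approximately) the same barycenter distribution, so group-wise statistics such as means and quantiles coincide regardless of group membership.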
