Poster in Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation
What's in a Query: Examining Distribution-based Amortized Fair Ranking
Aparna Balagopalan · Kai Wang · Asia Biega · Marzyeh Ghassemi
Keywords: [ Novel fairness metrics ] [ Metrics and Evaluation ]
Machine learning-driven rankings, where individuals (or items) are ranked in response to a query, mediate search exposure or attention in a variety of safety-critical settings. It is therefore important to ensure that such rankings are fair. Under the goal of equal opportunity, the attention allocated to an individual on a ranking interface should be proportional to their relevance for a given search query. In this work, we examine \emph{amortized} fair ranking, where relevance and attention are accumulated over a sequence of user queries to make fair ranking more feasible. Unlike prior methods that operate on the average attention each individual receives across a sequence of queries, we define new distance-based measures for attention distribution-aware fairness in ranking (DistFaiR). First, this allows us to propose new definitions of unfairness that are more reliable at test time. Second, we prove that group fairness is upper-bounded by individual fairness under this definition for a useful sub-class of distance functions, and show experimentally that maximizing individual fairness through integer linear programming-based optimization is often beneficial to group fairness. Lastly, we find that prior research in amortized fair ranking ignores critical information about queries, potentially leading to a fairwashing risk in practice by making rankings appear fairer than they actually are.
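As a rough illustration of the distance-based idea (not the authors' exact DistFaiR formulation), the sketch below contrasts a per-individual unfairness score computed from the full profile of attention and relevance across a query sequence with one computed only from their averages. The choice of total variation distance, the normalization, and all variable names here are assumptions made for illustration.

```python
import numpy as np

def distribution_unfairness(attention, relevance):
    """Per-individual unfairness as a distance between attention and
    relevance profiles accumulated over a sequence of queries.

    attention, relevance: arrays of shape (n_queries, n_individuals).
    The distance (total variation) and normalization are illustrative
    choices, not necessarily those used in the paper.
    """
    # Normalize each individual's per-query profile into a distribution.
    a = attention / np.clip(attention.sum(axis=0, keepdims=True), 1e-12, None)
    r = relevance / np.clip(relevance.sum(axis=0, keepdims=True), 1e-12, None)
    # Total variation distance per individual (column).
    return 0.5 * np.abs(a - r).sum(axis=0)

def average_unfairness(attention, relevance):
    """Prior-style measure: compare only attention and relevance averaged
    over the query sequence, so the per-query distribution is lost."""
    return np.abs(attention.mean(axis=0) - relevance.mean(axis=0))

# Toy example: two individuals, three queries. Attention matches relevance
# on average, but is misallocated query by query.
attention = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [0.5, 0.5]])
relevance = np.array([[0.0, 1.0],
                      [1.0, 0.0],
                      [0.5, 0.5]])

print(distribution_unfairness(attention, relevance))  # nonzero: per-query mismatch detected
print(average_unfairness(attention, relevance))       # zero: averages coincide
```

In this toy case the average-based measure reports no unfairness even though each individual receives attention on exactly the queries where they are not relevant, which is the kind of query-level information the abstract argues is lost by averaging.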