Poster in Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation
Towards Better Fairness Metrics for Counter-Human Trafficking AI Initiatives
Vidya Sujaya · Pratheeksha Nair · Reihaneh Rabbany
Keywords: [ Interdisciplinary considerations ] [ Case studies ] [ Ethical considerations ] [ Evaluation Metrics and Techniques ]
Several works have demonstrated the potential of machine learning (ML) for countering Human Trafficking (HT), particularly in analyzing online escort ads to detect sexual exploitation. Guidelines for this task call for building AI tools that are survivor-centric and minimally harmful to affected communities. Naturally, this also extends to ensuring the fairness of such anti-HT initiatives. But what does fairness mean in this context? Standard metrics like demographic parity (DP), equal opportunity, and equalized odds (EO) ultimately demand that a tool perform consistently across different sensitive groups. What if the domain itself is inherently biased towards some of these sensitive groups? In this work, we first study existing anti-HT methods through standard fairness frameworks, namely DP and EO, with respect to commonly studied sensitive attributes such as gender and ethnicity. Our initial experiments reveal an unexpected manifestation of the fairness-utility trade-off and raise questions about the appropriateness of standard fairness frameworks in this domain. These questions tie into the hyper-complexity of the HT domain, and this dilemma impels us to look beyond “one-size-fits-all” fairness metrics, echoing several other studies. We argue that to practically evaluate fairness in the HT domain, we need to understand the notions of fairness held by the different stakeholders within the HT ecosystem. This requires us, as developers of AI tools, to engage in continuous consultation with affected communities and stakeholders and to build trustworthy relationships with them.
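For concreteness, the three group-fairness criteria named above can all be expressed as gaps in group-conditional rates of a binary classifier. The sketch below is a minimal illustration under assumed conventions, not the paper's actual evaluation code: it assumes binary labels and predictions, and the group names and synthetic data are hypothetical placeholders rather than values from any escort-ad dataset.

```python
import numpy as np

def rate(y_pred, mask):
    """Mean prediction over a boolean mask; NaN if the mask selects nothing."""
    return y_pred[mask].mean() if mask.any() else np.nan

def fairness_gaps(y_true, y_pred, groups):
    """Max pairwise gaps for three standard group-fairness criteria:
    demographic parity (selection rate), equal opportunity (TPR),
    and equalized odds (the worse of the TPR and FPR gaps)."""
    sel, tpr, fpr = [], [], []
    for g in np.unique(groups):
        m = groups == g
        sel.append(rate(y_pred, m))                  # P(Yhat=1 | A=g)
        tpr.append(rate(y_pred, m & (y_true == 1)))  # P(Yhat=1 | Y=1, A=g)
        fpr.append(rate(y_pred, m & (y_true == 0)))  # P(Yhat=1 | Y=0, A=g)
    gap = lambda xs: float(np.nanmax(xs) - np.nanmin(xs))
    return gap(sel), gap(tpr), max(gap(tpr), gap(fpr))

# Toy illustration with synthetic labels; "group_a"/"group_b" are
# hypothetical placeholders, not real sensitive-attribute values.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
groups = rng.choice(["group_a", "group_b"], 1000)
dp, eopp, eodds = fairness_gaps(y_true, y_pred, groups)
print(f"DP gap: {dp:.3f}  EqOpp gap: {eopp:.3f}  EqOdds gap: {eodds:.3f}")
```

A gap of zero means the tool behaves identically across groups under that criterion; the abstract's central question is whether driving these gaps to zero is even the right goal when the underlying domain is itself skewed across the sensitive groups.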