Poster in Affinity Event: Queer in AI

Mitigating Bias in Queer Representation within Large Language Models: A Collaborative Agent Approach

Tianyi Huang · Arya Somasundaram

Keywords: [ Inclusivity ] [ LGBTQIA+ ] [ Queer ] [ Large Language Models ] [ Multi-Agent Systems ] [ Bias ]


Abstract: Large Language Models (LLMs) have revolutionized natural language processing but often perpetuate societal biases, notably those affecting the LGBTQIA+ community. These biases lead to misrepresentation and marginalization, reinforcing discrimination and queer erasure in AI-generated content. To address this issue, this work introduces a collaborative agent pipeline that detects and mitigates bias in LLM outputs, focusing specifically on pronoun inclusivity to represent all gender identities accurately. The multi-agent framework employs specialized agents that sequentially analyze, critique, and optimize language outputs for inclusivity. Evaluations on the Tango dataset, a benchmark for gender pronoun usage, show that this approach improves inclusive pronoun classification by 11.2 percentage points over the baseline GPT-4 model, a statistically significant gain ($\chi^2 = 78.52$, $p < 0.001$). These findings demonstrate the efficacy of agent-driven frameworks in reducing bias and promoting fairness in AI systems. By addressing the nuanced challenges of LGBTQIA+ representation, this work advances the development of socially responsible AI that respects and reflects the diversity of human identities, setting a precedent for future research on bias mitigation in language models.
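The abstract does not detail the pipeline's implementation, but the described analyze-critique-optimize sequence can be sketched as follows. Everything here, including the agent roles, the system prompts, and the `call_llm` helper, is an illustrative assumption rather than the authors' actual code.

```python
# Minimal sketch of a sequential multi-agent bias-mitigation pipeline.
# Agent roles, prompts, and call_llm() are illustrative assumptions;
# the abstract does not specify the authors' exact design.

from dataclasses import dataclass


def call_llm(system_prompt: str, user_text: str) -> str:
    """Hypothetical wrapper around an LLM chat API (e.g., GPT-4).
    Stubbed so the sketch runs; swap in a real client call."""
    return user_text  # stub: echo the input unchanged


@dataclass
class Agent:
    name: str
    system_prompt: str

    def run(self, text: str) -> str:
        return call_llm(self.system_prompt, text)


# Specialized agents applied in sequence: analyze -> critique -> optimize.
PIPELINE = [
    Agent("analyzer",
          "Identify pronoun usage that assumes or erases gender identities."),
    Agent("critic",
          "Critique the flagged passages and explain why they are exclusionary."),
    Agent("optimizer",
          "Rewrite the text with inclusive pronouns, preserving meaning."),
]


def mitigate(text: str) -> str:
    # Each agent conditions on the previous agent's output, matching the
    # abstract's sequential (rather than parallel) arrangement.
    for agent in PIPELINE:
        text = agent.run(text)
    return text


if __name__ == "__main__":
    print(mitigate("Every engineer should check his code before deployment."))
```

The reported significance test can likewise be sketched as a chi-square test of independence over classification counts; this is one plausible reading of the evaluation, and the counts below are placeholders, not the paper's data.

```python
# Hedged sketch of the significance test. The counts are hypothetical
# (chosen only to reflect an 11.2 pp improvement); the reported
# chi^2 = 78.52 comes from the authors' actual evaluation.
from scipy.stats import chi2_contingency

# rows: model (baseline GPT-4, agent pipeline)
# cols: outputs classified as (inclusive, non-inclusive)
observed = [
    [620, 380],  # hypothetical baseline counts (62.0% inclusive)
    [732, 268],  # hypothetical pipeline counts (73.2% inclusive)
]
chi2, p, dof, _ = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p:.4g}, dof={dof}")
```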
