Poster
in
Workshop: Red Teaming GenAI: What Can We Learn from Adversaries?
Decoding Biases: An Analysis of Automated Methods and Metrics for Gender Bias Detection in Language Models
Shachi H. Kumar · Saurav Sahay · Sahisnu Mazumder · Eda Okur · Ramesh Manuvinakurike · Nicole Beckage · Hsuan Su · Hung-yi Lee · Lama Nachman
Keywords: [ Adversarial prompting ] [ Bias Detection ] [ LLM as a judge ]
Large Language Models (LLMs) have excelled at language understanding and generating human-level text. However, even with supervised training and human alignment, these LLMs are susceptible to adversarial attacks where malicious users can prompt the model to generate undesirable text. LLMs also inherently encode potential biases that can cause various harmful effects during interactions. Bias evaluation metrics lack standards as well as consensus and existing methods often rely on human-generated templates and annotations which are expensive and labor intensive. In this work, we train models to automatically create adversarial prompts to elicit biased responses from target LLMs. We present LLM-based bias detection metrics, i.e., LLM-as-a-judge, and also analyze several existing automatic evaluation methods and metrics. We analyze the various nuances of model responses, identify the strengths and weaknesses of model families, and assess where evaluation methods fall short. We compare these metrics to human evaluation and show that the LLM-as-a-Judge metric aligns with human judgement on bias detection in response generation.