

Poster in Workshop: Safe Generative AI

LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users

Elinor Poole-Dayan · Deb Roy · Jad Kabbara


Abstract:

While state-of-the-art Large Language Models (LLMs) have shown impressive performance on many tasks, systematically evaluating undesirable behaviors of generative models remains critical. In this work, we examine ethics and bias in terms of how model behavior changes depending on three user traits: English proficiency, education level, and country of origin. We evaluate how fairly LLMs respond to different users in terms of information accuracy, truthfulness, and refusals. We present extensive experiments on three state-of-the-art LLMs and two datasets targeting truthfulness and factuality. Our findings suggest that undesirable behaviors occur disproportionately more often for users with lower English proficiency, lower education status, and origins outside the US, rendering these models unreliable sources of information for their most vulnerable users.
