

Spotlight in Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation

Beyond the Safety Bundle: Auditing the Helpful and Harmless Dataset

Khaoula Chehbouni · Jonathan Colaço Carr · Yash More · Jackie CK Cheung · Golnoosh Farnadi

Keywords: [ Audits ] [ Evaluation Metrics and Techniques ] [ NLP ] [ Trade-offs ]

Sat 14 Dec 5:27 p.m. PST — 5:30 p.m. PST
 
presentation: Algorithmic Fairness through the lens of Metrics and Evaluation
Sat 14 Dec 9 a.m. PST — 5:30 p.m. PST

Abstract:

In an effort to mitigate the harms of large language models (LLMs), learning from human feedback (LHF) has been used to steer LLMs towards outputs that are intended to be both less harmful and more helpful. Despite the widespread adoption of LHF in practice, the quality of this feedback and its effectiveness as a safety mitigation technique remain unclear. This study addresses these issues by auditing the widely used Helpful and Harmless (HH) dataset by Anthropic. Our work includes: (1) a thorough investigation of the dataset's content through both manual and automated evaluation, revealing significant distributional gaps, conceptualization failures, and quality issues; (2) experiments demonstrating the dataset's impact on models' safety, showing disparate safety behaviors across demographic groups; and (3) an analysis of the 100 most influential papers citing this dataset. Our findings highlight the need for demographically balanced preference datasets and a recontextualization of safety issues in LLM development.
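For readers who want a first-pass look at the data being audited, the sketch below (not taken from the paper) shows how one might load the public Hugging Face copy of the HH dataset and run a crude term-frequency check over dialogues. The dataset identifier `Anthropic/hh-rlhf`, the sample size, and the short demographic term list are illustrative assumptions, not the authors' audit protocol.

```python
# Minimal sketch of inspecting the Helpful and Harmless (HH) preference data,
# assuming the Hugging Face mirror "Anthropic/hh-rlhf" with "chosen"/"rejected"
# dialogue strings. Illustrative only; not the paper's evaluation pipeline.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("Anthropic/hh-rlhf", split="train")

# Each row holds one preferred ("chosen") and one dispreferred ("rejected")
# Human/Assistant dialogue as a single string.
example = dataset[0]
print(example["chosen"][:500])
print(example["rejected"][:500])

# Crude automated check: count how often a hypothetical list of demographic
# terms appears in a sample of dialogues, as a first look at coverage gaps.
terms = ["woman", "man", "muslim", "black", "gay"]  # illustrative only
counts = Counter()
for row in dataset.select(range(5000)):  # small sample for speed
    text = row["chosen"].lower()
    for term in terms:
        if term in text:
            counts[term] += 1
print(counts)
```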
