

Poster in Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation

Measuring Representational Harms in Image Generation with a Multi-Group Proportional Metric

Sangwon Jung · Claudio Mayrink Verdun · Alex Oesterling · Sajani Vithana · Taesup Moon · Flavio du Pin Calmon

Keywords: [ Evaluation Metrics and Techniques ] [ Novel fairness metrics ] [ Fairness Metrics ]

presentation: Algorithmic Fairness through the lens of Metrics and Evaluation
Sat 14 Dec 9 a.m. PST — 5:30 p.m. PST

Abstract:

Recent text-to-image generative models have captivated both the tech industry and the general public with their ability to create vivid, realistic images from textual descriptions. As these models proliferate, they also expose new concerns about their ability to represent diverse demographic groups, propagate stereotypes, and efface minority populations. Despite growing attention to the "safe" and "responsible" design of artificial intelligence (AI), there is no established methodology to systematically measure and control representational harms in large image generation models. This paper introduces a novel framework to measure representation statistics of multiple intersectional population groups in images generated by text-to-image AI models. We develop the concept of Multi-Group Proportional Representation (MPR) in image generation. MPR measures the worst-case deviation of representation statistics across a number of population groups (e.g., the over- or under-representation of groups defined by race, gender, age, or any other prespecified group and their intersections) in images produced by a generative model. Our framework allows for flexible and context-specific representation evaluation, where target representation statistics are defined by a user. Through experiments, we demonstrate that MPR can effectively measure representation statistics across multiple intersectional groups. This work provides a systematic approach to quantify representational biases in text-to-image models, offering a foundation for developing more inclusive generative AI systems.
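
To make the worst-case-deviation idea concrete, the following is a minimal, illustrative Python sketch. The function name, group labels, indicator annotations, and target proportions are hypothetical simplifications of the abstract's description (the paper's full MPR definition is richer, e.g., it covers general group statistics and their intersections); this is not the authors' implementation.

```python
import numpy as np


def worst_case_representation_gap(group_indicators, target_stats):
    """Illustrative sketch of a worst-case representation deviation.

    group_indicators: dict mapping group name -> binary array over generated
        images (1 if the image is annotated as depicting that group).
    target_stats: dict mapping group name -> user-specified target proportion.

    Returns the largest absolute gap between observed and target representation
    over all prespecified groups, along with the group attaining it.
    """
    gaps = {}
    for group, indicator in group_indicators.items():
        observed = float(np.mean(indicator))  # observed fraction of images showing this group
        gaps[group] = abs(observed - target_stats[group])
    worst_group = max(gaps, key=gaps.get)
    return gaps[worst_group], worst_group


# Hypothetical usage: 100 generated images annotated for two intersectional groups.
rng = np.random.default_rng(0)
indicators = {
    "woman & over-60": rng.integers(0, 2, size=100),
    "man & under-30": rng.integers(0, 2, size=100),
}
targets = {"woman & over-60": 0.25, "man & under-30": 0.25}
print(worst_case_representation_gap(indicators, targets))
```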
