Poster in Workshop on Responsibly Building Next Generation of Multimodal Foundation Models
Exploring Intrinsic Fairness in Stable Diffusion
Eunji Kim · Siwon Kim · Robin Rombach · Rahim Entezari · Sungroh Yoon
Keywords: [ debias ] [ fairness ] [ destereotype ] [ text-to-image ]
Recent text-to-image models such as Stable Diffusion produce photo-realistic images but often exhibit demographic biases. Previous debiasing efforts have focused predominantly on training-based approaches, neglecting the root causes of these biases and overlooking Stable Diffusion's inherent potential for generating unbiased images. In this paper, we demonstrate that Stable Diffusion intrinsically possesses fairness, which can be unlocked to produce debiased outputs. Through carefully designed experiments, we analyze how initial noise sampling and text guidance each affect biased image generation. Our analysis reveals that excessive correlation between text prompts and the diffusion process is a key source of bias.
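The text guidance referred to above is, in standard Stable Diffusion pipelines, classifier-free guidance: at each denoising step the model's unconditional noise prediction is extrapolated toward its text-conditional prediction, and the guidance scale controls how strongly the prompt steers the sample. The sketch below illustrates only this generic update rule, not the paper's analysis; the function name and toy values are illustrative.

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, scale):
    # Standard CFG update: extrapolate from the unconditional noise
    # prediction toward the text-conditional one. A larger scale ties
    # the diffusion trajectory more tightly to the text prompt.
    return eps_uncond + scale * (eps_cond - eps_uncond)

rng = np.random.default_rng(0)
eps_u = rng.standard_normal(4)  # toy unconditional prediction
eps_c = rng.standard_normal(4)  # toy text-conditional prediction

# scale = 1 recovers the conditional prediction exactly
assert np.allclose(classifier_free_guidance(eps_u, eps_c, 1.0), eps_c)
# scale = 0 ignores the text prompt entirely
assert np.allclose(classifier_free_guidance(eps_u, eps_c, 0.0), eps_u)
```

Because the scale multiplies the prompt-dependent direction at every step, any demographic association encoded in the text embedding is repeatedly amplified along the trajectory, which is one way an excessive prompt–diffusion correlation can arise.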