Spotlight in Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation
Improving Bias Metrics in Vision-Language Models by Addressing Inherent Model Disabilities
Lakshmipathi Balaji Darur · Shanmukha Sai Keerthi Gouravarapu · Shashwat Goel · Ponnurangam Kumaraguru
Keywords: [ Computer Vision ] [ Bias Detection ] [ Evaluation Methods and Techniques ] [ General Fairness ]
Sat 14 Dec 9 a.m. PST — 5:30 p.m. PST
The integration of Vision-Language Models (VLMs) into various applications has highlighted the importance of evaluating these models for inherent biases, especially along gender and racial lines. Traditional bias assessment methods for VLMs typically rely on accuracy metrics, assessing disparities in performance across different demographic groups. These methods, however, often overlook the impact of the model's disabilities, such as a lack of spatial reasoning, which may skew the bias assessment. In this work, we propose an approach that systematically examines how current bias evaluation metrics account for the model's limitations. We introduce two methods that circumvent these disabilities by integrating spatial guidance from textual and visual modalities. Our experiments aim to refine bias quantification by effectively mitigating the impact of spatial reasoning limitations, offering a more accurate assessment of biases in VLMs.
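The accuracy-disparity style of bias metric the abstract refers to can be illustrated with a minimal sketch; the function, data, and group labels below are hypothetical illustrations, not the paper's actual evaluation code.

```python
from collections import defaultdict

def accuracy_disparity(predictions, labels, groups):
    """Per-group accuracy and the largest accuracy gap between groups.

    predictions, labels: aligned lists of model outputs and ground truth.
    groups: aligned list of demographic group identifiers.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)

    per_group_acc = {g: correct[g] / total[g] for g in total}
    disparity = max(per_group_acc.values()) - min(per_group_acc.values())
    return per_group_acc, disparity

# Toy example: the model is correct 3/4 of the time for group A
# but only 1/2 of the time for group B, giving a disparity of 0.25.
per_group, gap = accuracy_disparity(
    predictions=["cat", "dog", "cat", "dog", "cat", "dog"],
    labels=["cat", "dog", "cat", "cat", "cat", "cat"],
    groups=["A", "A", "A", "A", "B", "B"],
)
print(per_group, gap)
```

As the abstract argues, a gap measured this way can conflate demographic bias with unrelated model limitations (e.g., weak spatial reasoning on the evaluation prompts), which is the confound the proposed methods aim to remove.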