Poster in Workshop: Algorithmic Fairness through the Lens of Time
What Comes After Auditing: Distinguishing Between Algorithmic Errors and Task Specification Issues
Charvi Rastogi
Abstract:
General-purpose generative AI models (GMs) have demonstrated remarkable capabilities, but they have also exhibited instances of inappropriate or harmful behavior, often stemming from the inherent subjectivity of the tasks they undertake. While auditing and benchmarking work provides a vital starting point for understanding the harms perpetuated by GMs, the solutions proposed for updating GMs often reveal a disconnect from the nuances of task subjectivity. Consequently, we argue for the importance of distinguishing between task specification issues and algorithmic errors, both conceptually and in the methods used to address them, in order to comprehensively mitigate algorithmic harm.