Poster in Workshop: Algorithmic Fairness through the Lens of Time
Assessing Perceived Fairness in Machine Learning (ML) Process: A Conceptual Framework
Anoop Mishra · Deepak Khazanchi
In ML applications, “unfairness” can arise from bias in the data, the curation process, erroneous assumptions, and implicit bias introduced during algorithmic development. As ML applications come into broader use, developing fair ML applications is critical, and assessing fairness has become a central concern of Responsible AI practice across research, industry, and academia. However, a literature survey suggests that fairness in ML is highly subjective and that there is no coherent way to describe the fairness of AI/ML processes and applications. To better understand how fairness is perceived in the ML process, we conducted virtual focus groups with developers, reviewed prior literature, and integrated notions from justice theory to propose that perceived fairness is a multidimensional concept. In this paper, we explore the initial outcomes of this effort.