Poster
On Human-Aligned Risk Minimization
Liu Leqi · Adarsh Prasad · Pradeep Ravikumar
East Exhibition Hall B, C #86
Keywords: Fairness, Accountability, and Transparency; Applications; Computational Social Science
The statistical decision theoretic foundations of modern machine learning have largely focused on minimizing the expectation of some loss function for a given task. However, seminal results in behavioral economics have shown that human decision-making is based on risk measures other than the expectation of any given loss function. In this paper, we pose a simple question: instead of minimizing expected loss, could we minimize a better, human-aligned risk measure? While this might not seem natural at first glance, we analyze the properties of such a revised risk measure and show, surprisingly, that it might also better align with additional desiderata, such as fairness, that have attracted considerable recent attention. We focus in particular on a class of human-aligned risk measures inspired by cumulative prospect theory. We empirically study these risk measures and demonstrate their improved performance on desiderata such as fairness, in contrast to the traditional workhorse of expected loss minimization.
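To make the contrast with expected loss concrete, a distortion-style empirical risk in the spirit of cumulative prospect theory can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a Tversky–Kahneman probability weighting function with an illustrative parameter `gamma=0.61`, and the names `tk_weight` and `cpt_empirical_risk` are hypothetical.

```python
import numpy as np

def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function w(p);
    overweights small probabilities, with w(0)=0 and w(1)=1."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_empirical_risk(losses, weight=tk_weight):
    """Rank-dependent (distortion) empirical risk: sort the losses and
    reweight each one by a difference of the weighting function applied
    to decumulative probabilities, instead of the uniform weight 1/n
    used by the empirical mean."""
    losses = np.sort(np.asarray(losses, dtype=float))  # ascending order
    n = len(losses)
    tail_probs = 1.0 - np.arange(n + 1) / n            # P(L >= l) on a grid
    w = weight(tail_probs)
    # decision weight for the i-th smallest loss:
    #   pi_i = w((n - i + 1)/n) - w((n - i)/n); the weights sum to 1
    decision_weights = w[:-1] - w[1:]
    return float(decision_weights @ losses)
```

For example, `cpt_empirical_risk(per_example_losses)` can replace `np.mean(per_example_losses)` in an evaluation loop. Relative to the plain mean, this distorted risk places extra weight on the tail of the loss distribution, i.e., on the worst-off examples, which gives some intuition for why such measures can interact with fairness desiderata.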