Spotlight
in
Workshop: Algorithmic Fairness through the Lens of Time
Transparency Through the Lens of Recourse and Manipulation
Yatong Chen · Andrew Estornell · Yevgeniy Vorobeychik · Yang Liu
Individuals often seek to reverse undesired outcomes in interactions with automated systems, such as loan denials, by modifying their features. These reversions can occur either through system-recommended actions, known as "recourse", or through manipulation actions such as misreporting feature values. Providing recourse can benefit users by enabling feature improvements (e.g., improving creditworthiness by paying off debt) and can enhance the system's own utility (e.g., by creating more creditworthy individuals to whom the system can lend). However, providing recourse also increases the transparency of the decision rule and thus introduces opportunities for strategic individuals to better exploit the system; this is particularly true when groups of agents share information (e.g., sharing graduate school admission information on websites such as GradCafe). This natural tension will ultimately decide whether the system elects to provide recourse. This differs from the current literature, which presumes the system's willingness to provide recourse without investigating whether such willingness is rational. To address this gap, we propose a framework through which the interplay of transparency, recourse, and manipulation can be investigated. Within this framework, we demonstrate that a rational system is frequently incentivized to provide only a small fraction of agents with recourse actions. We quantify the social cost of the system's hesitance to provide recourse and demonstrate that rational behavior by the system results in a systemic decrease in the population's total utility. Further, we find that this utility decrease can fall disproportionately on sensitive groups within the population (such as those defined by race or gender).