Poster
in
Workshop: Socially Responsible Language Modelling Research (SoLaR)
Monitoring Human Dependence On AI Systems With Reliance Drills
Rosco Hunter · Richard Moulange · Jamie Bernardi · Merlin Stein
Keywords: [ Automation Bias ] [ Risk Management ] [ Over-Reliance ] [ Human-Computer Interaction ] [ Systemic Risk ]
AI systems are assisting humans with an increasingly broad range of intellectual tasks. Humans can become over-reliant on this assistance if they accept AI-generated advice even when they would make a better decision on their own. To identify real-world instances of over-reliance, this paper proposes the reliance drill: an exercise that tests whether a human can recognise mistakes in AI-generated advice. We introduce a pipeline that organisations could use to implement these drills. As an example, we explain how this approach could be used to limit over-reliance on AI in a medical setting. We conclude by arguing that reliance drills could become a key tool for ensuring humans remain appropriately involved in AI-assisted decisions.
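The core scoring step of a reliance drill could be sketched as follows. This is a minimal illustration, not the authors' implementation: the `DrillItem` structure, the idea of planting known mistakes into advice, and the `run_reliance_drill` scoring function are all assumptions introduced here for clarity. A low catch rate on planted mistakes would be one possible signal of over-reliance.

```python
from dataclasses import dataclass

@dataclass
class DrillItem:
    advice: str              # advice shown to the human reviewer
    has_planted_error: bool  # True if a known mistake was deliberately injected

def run_reliance_drill(items, reviewer_flags):
    """Score one drill: did the reviewer flag the planted mistakes?

    `reviewer_flags[i]` is True if the reviewer flagged item i as mistaken.
    Returns the catch rate over planted-error items; a low rate suggests
    the reviewer may be over-relying on the AI's advice.
    """
    planted = [i for i, item in enumerate(items) if item.has_planted_error]
    if not planted:
        return None  # no planted errors in this drill; nothing to score
    caught = sum(1 for i in planted if reviewer_flags[i])
    return caught / len(planted)
```

For example, in a hypothetical medical setting a drill might mix correct dosage advice with one deliberately wrong dose and record whether the clinician flags it.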