Poster
in
Workshop: Socially Responsible Language Modelling Research (SoLaR)

Position: AI Agents & Liability – Mapping Insights from ML and HCI Research to Policy

Connor Dunlop · Weiwei Pan · Julia Smakman · Lisa Soder · Siddharth Swaroop

Keywords: [ AI and regulation ] [ AI governance ] [ AI and policy ] [ AI and law ] [ Liability ]


Abstract:

AI agents are loosely defined as systems capable of executing complex, open-ended tasks. Many have raised concerns that these systems will pose significant challenges to regulatory and legal frameworks, particularly in tort liability. However, because there is no universally accepted definition of an AI agent, concrete analyses of these challenges remain limited, especially as AI systems continue to grow in capability. In this paper, we argue that by focusing on the properties of AI agents, rather than on the threshold at which an AI system becomes an agent, we can map existing technical research to explicit categories of “foreseeable harms” in tort liability, and point to “reasonable actions” that developers can take to mitigate those harms.