

Poster
in
Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations

IDs for AI Systems

Alan Chan · Noam Kolt · Peter Wills · Usman Anwar · Christian Schroeder de Witt · Nitarshan Rajkumar · Lewis Hammond · David Krueger · Lennart Heim · Markus Anderljung


Abstract:

AI systems are increasingly pervasive, yet information needed to decide whether and how to engage with them may not exist or be accessible. A user may not be able to verify whether a system has certain safety certifications. An investigator may not know whom to investigate when a system causes an incident. It may not be clear whom to contact to shut down a malfunctioning system. Across a number of domains, IDs address analogous problems by identifying particular entities (e.g., a particular Boeing 747) and providing information about other entities of the same class (e.g., some or all Boeing 747s). We propose a framework in which IDs are ascribed to instances of AI systems (e.g., a particular chat session with Claude), and associated information is accessible to parties seeking to interact with that system. We characterize IDs, provide concrete use cases, analyze why and how certain actors could incentivize ID adoption, explore how deployers of AI systems could implement IDs, and highlight limitations and risks. IDs seem most warranted in settings where AI systems could have a large impact upon the world, such as in making financial transactions or contacting real humans. Limited experimentation with IDs, particularly from deployers of and actors who provide services to AI systems, seems justified. With further study, IDs could help with managing a world where AI systems pervade society.
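The framework's core object, an ID ascribed to a particular instance of an AI system together with accessible associated information, could be sketched as a simple record. The field names and example values below are purely illustrative assumptions, not a schema from the paper:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of an ID record for one AI system instance.
# All field names and values are illustrative, not taken from the paper.
@dataclass(frozen=True)
class InstanceID:
    instance_id: str                  # identifies this particular instance (e.g., one chat session)
    system_class: str                 # identifies the class of systems (e.g., a model family)
    deployer: str                     # party operating the instance
    certifications: List[str] = field(default_factory=list)  # e.g., safety attestations
    incident_contact: str = ""        # whom to contact to report or shut down the instance

# A counterparty deciding whether to engage could inspect the record first:
record = InstanceID(
    instance_id="session-42",
    system_class="ExampleModel-v1",
    deployer="ExampleCorp",
    certifications=["safety-cert-A"],
    incident_contact="incidents@example.com",
)
```

Such a record captures both halves of the analogy in the abstract: the instance-level identifier (the particular Boeing 747) and class-level information shared across instances (all Boeing 747s).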
