Poster in Workshop: Socially Responsible Language Modelling Research (SoLaR)
Post-Deployment Regulatory Oversight for General-Purpose Large Language Models
Carson Ezell · Abraham Loeb
The development and deployment of increasingly capable, general-purpose large language models (LLMs) have led to a wide array of risks and harms from automation that are correlated across sectors and use cases. Effective regulation and oversight of general-purpose AI (GPAI) require the ability to monitor, investigate, and respond to risks and harms that appear across use cases, as well as to hold upstream developers accountable for downstream harms that result from their decisions and practices. We argue that existing processes for sector-specific AI oversight in the U.S. should be complemented by post-deployment oversight that addresses risks and harms specific to GPAI usage, which may require a new AI-focused agency. We examine oversight processes implemented by other federal agencies as precedents for the GPAI oversight activities that an AI agency could conduct. The post-deployment oversight function of an AI agency would complement other regulatory functions discussed elsewhere in the literature, such as pre-deployment licensing and model evaluations for LLMs.