Poster
in
Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations

AI-Generated Content and Public Persuasion: The Limited Effect of AI Authorship Labels

Isabel Gallegos · Chen Shani · Weiyan Shi · Federico Bianchi · Robb Willer · Dan Jurafsky


Abstract:

As the growing capabilities of generative AI models have enabled information creation and dissemination at increased scale and speed, labeling AI-generated content has emerged as a policy proposal to increase transparency for users and reduce risks of misinformation and deception. While prior work has investigated the persuasiveness of AI-generated political messaging without disclosing the author of the information, it remains unclear whether authorship disclosure affects persuasiveness. Understanding this effect is critical given proposed policies to label AI-generated content. In this study, we conduct a survey of U.S. respondents to investigate the persuasiveness of information labeled as AI-generated, human-written, or unlabeled across four policy issues. We find that disclosing AI authorship does not significantly affect persuasiveness. Furthermore, this relationship holds even when controlling for respondents' prior knowledge of the policy, political party, education level, age, and prior experience with AI tools. These results suggest that labeling content as AI-generated may only minimally diminish its persuasive impact, and that other solutions are needed to address this pressing issue.