

Poster in Workshop: Statistical Frontiers in LLMs and Foundation Models

Source Attribution for Large Language Model-Generated Data

Xinyang Lu · Jingtan Wang · Zitong Zhao · Zhongxiang Dai · Chuan Sheng Foo · See-Kiong Ng · Bryan Kian Hsiang Low

Keywords: [ Large Language Model ] [ Source Attribution ]

[ Project Page ]
Sat 14 Dec 3:45 p.m. PST — 4:30 p.m. PST

Abstract:

The impressive performance of Large Language Models (LLMs) and their immense potential for commercialization have given rise to serious concerns over the Intellectual Property (IP) of their training data. In particular, the synthetic texts generated by LLMs may infringe the IP of the data used to train the LLMs. Hence, it is imperative to be able to perform source attribution: identifying the data provider who contributed to the generation of a synthetic text by an LLM. In this paper, we show that this problem can be solved by watermarking, i.e., by enabling an LLM to generate synthetic texts with embedded watermarks that contain information about their source(s). We identify the key properties of such watermarking frameworks (e.g., source attribution accuracy, robustness against adversaries), and propose a source attribution framework that satisfies these key properties through our algorithmic design. Our framework enables an LLM to learn an accurate mapping from the generated texts to data providers, which sets the foundation for effective source attribution. Extensive empirical evaluations show that our framework achieves accurate source attribution.
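The core idea — embedding a provider-specific watermark at generation time and later recovering it for attribution — can be illustrated with a toy sketch. This is not the authors' framework: the vocabulary, the hash-based "green set" watermark key, and the helper names (`green_set`, `watermark_text`, `attribute`) are all hypothetical simplifications, assuming a single data provider per text and no adversarial edits.

```python
import hashlib

VOCAB = [f"tok{i}" for i in range(100)]  # toy vocabulary standing in for an LLM's token set

def green_set(provider_id: str, frac: float = 0.5) -> set:
    """Deterministic provider-specific subset of 'green' tokens (the watermark key)."""
    greens = set()
    for tok in VOCAB:
        h = hashlib.sha256((provider_id + tok).encode()).digest()
        if h[0] < 256 * frac:
            greens.add(tok)
    return greens

def watermark_text(tokens: list, provider_id: str) -> list:
    """Crude embedding stand-in: steer every token into the provider's green set."""
    greens = sorted(green_set(provider_id))
    return [t if t in greens else greens[hash(t) % len(greens)] for t in tokens]

def attribute(tokens: list, candidate_providers: list) -> str:
    """Attribute the text to the provider whose green set covers the most tokens."""
    def score(pid: str) -> float:
        g = green_set(pid)
        return sum(t in g for t in tokens) / len(tokens)
    return max(candidate_providers, key=score)

providers = ["provider_A", "provider_B", "provider_C"]
synthetic = watermark_text(VOCAB[:20], "provider_B")
print(attribute(synthetic, providers))  # prints "provider_B"
```

The watermarked text scores 1.0 against the true provider's key but only about 0.5 (chance level) against the others, so the argmax recovers the source with high probability; a real framework must additionally preserve text quality and survive paraphrasing, which this sketch does not model.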
