Poster in Workshop: Statistical Frontiers in LLMs and Foundation Models
Towards Optimal Statistical Watermarking
Baihe Huang · Hanlin Zhu · Banghua Zhu · Kannan Ramchandran · Michael Jordan · Jason Lee · Jiantao Jiao
Keywords: LLM, Hypothesis Testing, Watermarking
Abstract:
We study statistical watermarking by formulating it as a hypothesis testing problem, a general framework that subsumes all previous statistical watermarking methods. Key to our formulation is a coupling of the output tokens and the rejection region, realized by pseudo-random generators in practice, which enables non-trivial trade-offs between the Type I and Type II errors. We characterize the Uniformly Most Powerful (UMP) watermark in the general hypothesis testing setting and the minimax Type II error in the model-agnostic setting. In the common scenario where the output is a sequence of $n$ tokens, we establish nearly matching upper and lower bounds on the number of i.i.d. tokens required to guarantee small Type I and Type II errors. Our rate of $\Theta(h^{-1} \log (1/h))$ with respect to the average entropy per token $h$ reveals a fundamental gap from the $O(h^{-2})$ rate of previous works. Moreover, we formulate the robust watermarking problem, in which the user is allowed to perform a class of perturbations on the generated text, and we characterize the optimal Type II error of robust UMP tests via a linear programming problem. To the best of our knowledge, this is the first systematic statistical treatment of the watermarking problem with near-optimal rates in the i.i.d. setting, and it may be of interest for future work.
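To make the key/rejection-region coupling concrete, here is a minimal, illustrative sketch in Python. It is not the paper's UMP construction: the Gumbel-trick sampler and the sum-of-scores detector below are common stand-ins from earlier watermarking work, and the names `keyed_uniforms`, `generate`, `detect`, `VOCAB`, and `KEY` are hypothetical. The point it illustrates is the one stated in the abstract: generator and detector share a pseudo-random key, so the output tokens and the rejection region are coupled, and the threshold controls the Type I error exactly.

```python
# Illustrative sketch only: a Gumbel-trick watermark and a keyed detector,
# standing in for the abstract's coupling of tokens and rejection region.
import numpy as np
from scipy.stats import gamma

VOCAB = 50   # toy vocabulary size (hypothetical)
KEY = 1234   # shared secret key seeding the pseudo-random generator

def keyed_uniforms(key: int, position: int, vocab: int) -> np.ndarray:
    """Pseudo-random U(0,1) vector shared by generator and detector."""
    return np.random.default_rng((key, position)).random(vocab)

def generate(probs_per_step: list[np.ndarray], key: int) -> list[int]:
    """Watermarked sampling via the Gumbel trick: token_t = argmax_i u_i^(1/p_i).
    Marginally this samples exactly from p, but ties each token to the key."""
    tokens = []
    for t, p in enumerate(probs_per_step):
        u = keyed_uniforms(key, t, len(p))
        tokens.append(int(np.argmax(u ** (1.0 / np.maximum(p, 1e-12)))))
    return tokens

def detect(tokens: list[int], key: int, alpha: float = 0.01) -> bool:
    """Hypothesis test with H0: text generated independently of the key.
    Under H0, each score -log(1 - u_token) is Exp(1), so the sum is
    Gamma(n, 1); rejecting H0 above the (1 - alpha) Gamma quantile gives
    Type I error <= alpha by construction."""
    score = sum(-np.log1p(-keyed_uniforms(key, t, VOCAB)[tok])
                for t, tok in enumerate(tokens))
    return score > gamma.ppf(1.0 - alpha, a=len(tokens))
```

In this sketch the Type I error is pinned to alpha regardless of the model, while the Type II error shrinks as the per-token distributions become more spread out, which is the entropy dependence the abstract's $\Theta(h^{-1}\log(1/h))$ rate quantifies for the optimal test.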