Poster
Disentangled Style Domain for Implicit $z$-Watermark Towards Copyright Protection
Junqiang Huang · Zhaojun Guo · Ge Luo · Zhenxing Qian · Sheng Li · Xinpeng Zhang
Wed 11 Dec 4:30 p.m. – 7:30 p.m. PST
Abstract:
Text-to-image models have shown remarkable performance in high-quality image generation, while also raising intensified concerns about unauthorized dataset usage in training and personalized fine-tuning. Recent approaches, such as embedding watermarks, introducing perturbations, and inserting backdoors into datasets, depend on adding minor information, which limits their ability to detect unauthorized data usage. In this paper, we introduce a novel implicit watermarking scheme that is the first to utilize a disentangled style domain to detect unauthorized dataset usage in text-to-image models. Our approach generates the implicit watermark from the disentangled style domain, which enables self-generalization and mutual exclusivity within the style domain anchored by protected units. The domain, maximally offset by the identifier $z$ and negative samples, facilitates the structured delineation of dataset copyright boundaries across multiple sources of styles and contents in image generation. Additionally, we introduce the concept of a watermark distribution to establish a verification mechanism for copyright ownership under hybrid or partial infringements, addressing deficiencies in traditional mechanisms of dataset copyright ownership for AI mimicry. Notably, our method achieves $\textbf{One-Sample-Verification}$ for dataset copyright verification in AI mimicry generations.
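As a rough, purely illustrative reading of the "maximally offset" idea (the style encoder $f$, margin $m$, protected anchor $x_{z}$, and negative set $\mathcal{N}$ below are assumptions introduced for exposition, not notation from the paper), one margin-based way to separate the style domain anchored by identifier $z$ from negative samples is
$$
\mathcal{L}_{\mathrm{offset}} \;=\; \max\Bigl(0,\; m \;-\; \min_{x^{-} \in \mathcal{N}} \bigl\lVert f(x^{-}) - f(x_{z}) \bigr\rVert_{2}\Bigr),
$$
so that minimizing $\mathcal{L}_{\mathrm{offset}}$ keeps every negative sample at least a margin $m$ away from the protected anchor in the style embedding space; in this sketch, verification amounts to checking whether a generated image falls inside the offset region around the anchored style domain.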