Poster in Workshop: Safe Generative AI

Privacy Protection in Personalized Diffusion Models via Targeted Cross-Attention Adversarial Attack

Xide Xu · Muhammad Atif Butt · Sandesh Kamath · Bogdan Raducanu


Abstract:

The growing demand for customized visual content has led to the rise of personalized text-to-image (T2I) diffusion models. Despite their remarkable potential, these models pose significant privacy risks when misused for malicious purposes. In this paper, we propose a novel and efficient adversarial attack method, Concept Protection by Selective Attention Manipulation (CoPSAM), which targets only the cross-attention layers of a T2I diffusion model. To this end, we carefully construct imperceptible noise that is added to clean samples to obtain their adversarial counterparts. The noise is computed during the fine-tuning process by maximizing the discrepancy between the cross-attention maps of the user-specific token and of the class-specific token. Experimental validation on a subset of the CelebA-HQ face image dataset demonstrates that our approach outperforms existing methods. In addition, the qualitative evaluation reveals two important advantages of our method: (i) it achieves better protection at lower noise levels than its competitors, and (ii) it protects the content from unauthorized use, thereby safeguarding the individual's identity from potential misuse.
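Since the abstract only describes the attack at a high level, the snippet below is a minimal, self-contained sketch of a CoPSAM-style objective under several stated assumptions: a toy cross-attention layer stands in for the diffusion model's cross-attention, the discrepancy is taken to be an MSE between attention maps, the optimization is a standard PGD-style ascent, and `user_idx` / `class_idx` are hypothetical positions of the user-specific and class-specific tokens. It is an illustration, not the authors' implementation.

```python
# Minimal sketch of a CoPSAM-style adversarial objective (illustration only).
# A toy cross-attention layer replaces the diffusion model's cross-attention;
# user_idx / class_idx are hypothetical token positions, and MSE is assumed
# as the discrepancy measure between the two attention maps.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
dim, n_tokens, n_pix = 64, 8, 256            # feature dim, prompt length, latent pixels
W_q = torch.randn(dim, dim) / dim ** 0.5     # query projection (image features)
W_k = torch.randn(dim, dim) / dim ** 0.5     # key projection (text embeddings)

def cross_attention_maps(img_feats, txt_emb):
    """Softmax cross-attention maps of shape (n_pix, n_tokens)."""
    q = img_feats @ W_q                      # (n_pix, dim)
    k = txt_emb @ W_k                        # (n_tokens, dim)
    return F.softmax(q @ k.T / dim ** 0.5, dim=-1)

x_clean = torch.rand(n_pix, dim)             # stand-in for clean image features
txt_emb = torch.randn(n_tokens, dim)         # stand-in prompt embeddings
user_idx, class_idx = 1, 2                   # hypothetical token positions
eps, alpha, steps = 8 / 255, 2 / 255, 40     # L-inf budget, step size, iterations

delta = torch.zeros_like(x_clean, requires_grad=True)
for _ in range(steps):
    maps = cross_attention_maps(x_clean + delta, txt_emb)
    # Ascend on the gap between the user-token and class-token attention maps.
    gap = F.mse_loss(maps[:, user_idx], maps[:, class_idx])
    gap.backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()   # gradient-ascent (PGD) step
        delta.clamp_(-eps, eps)              # keep the noise imperceptible
    delta.grad.zero_()

x_adv = (x_clean + delta).detach()           # adversarial counterpart of the sample
```

Here the L-infinity clamp plays the role of the imperceptibility constraint mentioned in the abstract; the paper may well use a different noise budget, discrepancy measure, or optimizer.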
