

Poster in Workshop: Safe Generative AI

Dynamic Negative Guidance of Diffusion Models: Towards Immediate Content Removal

Felix Koulischer · Johannes Deleu · Gabriel Raya · Thomas Demeester · Luca Ambrogioni


Abstract:

The rise of highly realistic, large-scale generative diffusion models comes hand in hand with public safety concerns. In addition to the risk of generating Not-Safe-For-Work content from models trained on large internet-scraped datasets, there is a serious concern about reproducing copyrighted material, including celebrity images and artistic styles. We introduce Dynamic Negative Guidance, a theoretically grounded negative guidance scheme that avoids the generation of unwanted content without drastically harming the diversity of the model. Our approach avoids some of the disadvantages of the widespread, yet theoretically unfounded, Negative Prompting algorithm. Our guidance scheme does not require retraining the conditional model and can therefore be applied as a temporary solution to meet customer requests until model fine-tuning is possible.
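For context, the Negative Prompting baseline referenced above is commonly implemented by combining the denoiser's predictions for a positive and a negative condition with a fixed guidance scale; the display below is that widely used convention, not a formula taken from this abstract:

$$
\tilde{\epsilon}_\theta(x_t, t) \;=\; \epsilon_\theta(x_t, t \mid c_{+}) \;+\; \gamma\,\bigl(\epsilon_\theta(x_t, t \mid c_{+}) - \epsilon_\theta(x_t, t \mid c_{-})\bigr),
$$

where $c_{+}$ is the desired prompt, $c_{-}$ encodes the content to suppress, and $\gamma$ is a constant guidance scale. As a hedged illustration of what "dynamic" guidance could mean, one may imagine replacing the constant $\gamma$ with a state- and time-dependent scale $\gamma_t(x_t)$ that only grows when the current trajectory is likely to produce the unwanted concept; this adaptive form is an assumption for illustration, not the exact rule proposed in the paper.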
