Poster Session in Workshop: Scientific Methods for Understanding Neural Networks

SolidMark: How to Evaluate Memorization in Image Generative Models

Nicky Kriplani · Minh Pham · Malikka Rajshahi · Chinmay Hegde · Niv Cohen

[ Project Page ]
Sun 15 Dec 4:30 p.m. PST — 5:30 p.m. PST

Abstract:

Diffusion models such as Stable Diffusion, DALL-E 2, and Imagen have garnered significant attention for their ability to generate high-quality synthetic images from their training distribution. However, recent works have shown that diffusion models can memorize training images and emit them at generation time. Although this behavior has been extensively studied, some of the metrics used for evaluation suffer from various biases. We introduce SolidMark, a novel metric that provides a well-defined notion of pixel-level memorization. Our metric injects patterns (keys) into training images and aims to retrieve them at generation time via inpainting. We use our metric to evaluate existing memorization mitigation techniques. Based on our findings, we propose our metric as an intuitive lower bound on the amount of pixel-level memorization in a model.
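The abstract describes the evaluation loop only at a high level. Below is a minimal, hypothetical sketch of how such a key-based check might be scored, assuming the key is a solid grayscale border of a chosen intensity and that `inpaint(model, image, mask)` stands in for whatever inpainting interface the model exposes; none of these names come from the paper.

```python
import numpy as np

# Hypothetical sketch of a SolidMark-style evaluation, assuming the "key" is a
# solid grayscale border appended to each training image and that
# inpaint(model, image, mask) is a stand-in for the model's inpainting call.

def add_key(image: np.ndarray, key: float, border: int = 8) -> np.ndarray:
    """Pad the image with a solid border whose intensity encodes the key in [0, 1]."""
    h, w, c = image.shape
    keyed = np.full((h + 2 * border, w + 2 * border, c), key, dtype=image.dtype)
    keyed[border:-border, border:-border] = image
    return keyed

def key_mask(shape, border: int = 8) -> np.ndarray:
    """Binary mask selecting the border region the model must inpaint."""
    mask = np.ones(shape[:2], dtype=bool)
    mask[border:-border, border:-border] = False
    return mask

def key_recall_error(model, image: np.ndarray, true_key: float, border: int = 8) -> float:
    """Mask out the key, ask the model to inpaint it, and score the recovered intensity.

    A small error means the model reproduced the key paired with this training
    image, which is read as a signal of pixel-level memorization.
    """
    keyed = add_key(image, true_key, border)
    mask = key_mask(keyed.shape, border)
    reconstruction = inpaint(model, keyed, mask)   # hypothetical inpainting call
    recovered_key = reconstruction[mask].mean()    # average intensity over the border
    return abs(recovered_key - true_key)           # 0 => key perfectly recalled
```

Averaging this error over keyed training images would then give the kind of lower-bound memorization estimate the abstract alludes to; the actual key design and scoring in SolidMark may differ.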
