Poster in Workshop: Safe Generative AI
Detecting Origin Attribution for Text-to-Image Diffusion Models in RGB and Beyond
Katherine Xu · Lingzhi Zhang · Jianbo Shi
Modern text-to-image (T2I) diffusion models can generate images with remarkable realism and creativity. These advancements have sparked research in fake image detection and attribution, yet prior studies have not fully explored the practical and scientific dimensions of this task. In this work, we not only attribute images to 12 state-of-the-art T2I generators but also investigate which inference-stage hyperparameters are discernible. We further examine which visual traces are leveraged in origin attribution by perturbing high-frequency details and employing mid-level representations of image style and structure. Notably, altering high-frequency information causes only slight reductions in accuracy, and training an attributor on style representations outperforms training on RGB images. Our analyses underscore that fake images are detectable and attributable at various levels of visual granularity.
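To make the high-frequency perturbation idea concrete, the sketch below removes frequency content above a radial cutoff with a 2D FFT low-pass mask. This is a minimal illustration of one plausible way to perturb high-frequency details, not the paper's exact procedure; the function name, cutoff parameterization, and single-channel input are all assumptions for the example.

```python
import numpy as np

def perturb_high_frequencies(image, cutoff=0.25):
    """Zero out spatial-frequency components whose radial frequency
    exceeds `cutoff` * Nyquist, via an FFT low-pass mask.

    A simple stand-in for high-frequency perturbation; the exact
    perturbation used in the paper is not specified here.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]   # vertical frequencies in [-0.5, 0.5)
    fx = np.fft.fftfreq(w)[None, :]   # horizontal frequencies
    mask = np.sqrt(fx**2 + fy**2) <= cutoff * 0.5  # radial low-pass
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

# A constant image has only a DC component, so it passes through
# unchanged; a Nyquist-rate checkerboard is removed almost entirely.
flat = np.ones((32, 32))
checker = 1.0 - 2.0 * (np.indices((32, 32)).sum(axis=0) % 2)
flat_out = perturb_high_frequencies(flat)
checker_out = perturb_high_frequencies(checker)
```

An attributor's accuracy on images filtered this way, compared with accuracy on the originals, indicates how much the classifier relies on high-frequency traces.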