

Poster in Workshop: 3rd Workshop on New Frontiers in Adversarial Machine Learning (AdvML-Frontiers)

RenderAttack: Hundreds of Adversarial Attacks Through Differentiable Texture Generation

Dron Hazra · Alex Bie · Mantas Mazeika · Xuwang Yin · Andy Zou · Dan Hendrycks · Max Kaufmann

Keywords: [ differentiable rendering ] [ unforeseen adversaries ] [ textures ] [ adversarial attacks ] [ benchmark ]


Abstract: A longstanding problem in adversarial robustness has been defending against attacks beyond standard $\ell_p$ threat models. However, the space of possible non-$\ell_p$ attacks is vast, and existing work has only developed a small number of attacks, due to the manual effort required to design and implement each individual attack. Building on recent progress in differentiable material rendering, we propose RenderAttack, a scalable framework for developing large numbers of structurally diverse, non-$\ell_p$ adversarial attacks. RenderAttack leverages vast, existing repositories of hand-designed image perturbations in the form of _procedural texture generation graphs_, converting them to differentiable transformations amenable to gradient-based optimization. In this work, we curate 160 new attacks and introduce the $\mathsf{ImageNet{\text -}RA}$ benchmark. In experiments, we find that $\mathsf{ImageNet{\text -}RA}$ poses a challenge for existing robust models and exposes new regions of attack-space. By comparing state-of-the-art models and defenses, we identify promising directions for future work in ensuring robustness to a wide range of test-time adversaries.
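The core mechanism described in the abstract, optimizing the parameters of a differentiable image transformation with gradients rather than perturbing pixels directly, can be illustrated with a toy sketch. The code below is an assumption for illustration only, not the paper's implementation: it stands in a 4-parameter sinusoid for a procedural texture graph, a fixed linear score for the target model, and central finite differences for automatic differentiation.

```python
import numpy as np

def texture(params, shape):
    """Toy procedural texture: amplitude, x/y frequency, phase (illustrative)."""
    amp, fx, fy, phase = params
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return amp * np.sin(fx * xs + fy * ys + phase)

def attack(image, params):
    """Apply the texture perturbation and keep pixels in [0, 1]."""
    return np.clip(image + texture(params, image.shape), 0.0, 1.0)

def score(image, params, w):
    """Stand-in 'classifier' logit that the attacker wants to drive down."""
    return float(np.sum(w * attack(image, params)))

def fd_grad(image, params, w, eps=1e-4):
    """Central finite differences w.r.t. the texture parameters
    (a real pipeline would backpropagate through the texture graph)."""
    g = np.zeros_like(params)
    for i in range(params.size):
        hi, lo = params.copy(), params.copy()
        hi[i] += eps
        lo[i] -= eps
        g[i] = (score(image, hi, w) - score(image, lo, w)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
image = rng.random((8, 8))           # stand-in input image
w = rng.standard_normal((8, 8))      # stand-in linear model weights
params = np.array([0.05, 0.5, 0.3, 0.0])

before = score(image, params, w)
best_score, best_params = before, params.copy()
for _ in range(100):
    params -= 0.002 * fd_grad(image, params, w)   # gradient descent step
    s = score(image, params, w)
    if s < best_score:                            # keep the best iterate found
        best_score, best_params = s, params.copy()
```

The key property this illustrates is that the attack searches a low-dimensional, structured parameter space (here four texture parameters) instead of the full pixel space of an $\ell_p$ ball, which is what makes such attacks structurally different from standard $\ell_p$ perturbations.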
