Spotlight Poster
Adaptive Randomized Smoothing: Certifying Multi-Step Defences against Adversarial Examples
Saiyue Lyu · Shadab Shaikh · Frederick Shpilevskiy · Evan Shelhamer · Mathias Lécuyer
Wed 11 Dec, 11 a.m. – 2 p.m. PST
Abstract:
We propose Adaptive Randomized Smoothing (ARS) to certify the predictions of our test-time adaptive models against adversarial examples. ARS extends the analysis of randomized smoothing using f-Differential Privacy to certify the adaptive composition of multiple steps. For the first time, our theory covers the sound adaptive composition of general and high-dimensional functions of noisy input. We instantiate ARS on deep image classification to certify predictions against adversarial examples of bounded $L_{\infty}$ norm. In the $L_{\infty}$ threat model, our flexibility enables adaptation through high-dimensional input-dependent masking. We design adaptivity benchmarks based on CIFAR-10 and CelebA, and show that ARS improves certified accuracy by up to 10 percentage points. On ImageNet, ARS improves certified accuracy by 1 percentage point over standard RS without adaptivity.
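To make the two-step structure concrete, the sketch below illustrates adaptive randomized smoothing in spirit: a mask is predicted from one noisy view of the input, then the classifier is smoothed over fresh noise on the masked input. The function names (`mask_model`, `classifier`), the shared noise scale `sigma` for both steps, and the single-step Cohen-et-al.-style radius are illustrative assumptions, not the paper's exact construction; the ARS certificate instead accounts for both steps jointly via f-DP composition.

```python
# Illustrative sketch of a two-step (adaptive) randomized smoothing pipeline.
# Assumptions: mask_model and classifier are user-provided callables; both
# steps use the same noise scale sigma; the radius below is the standard
# single-step Cohen et al. bound, not the ARS f-DP composition certificate.
import numpy as np
from scipy.stats import norm


def smoothed_predict_and_certify(x, mask_model, classifier, sigma, n_samples=1000):
    """Monte-Carlo estimate of a two-step smoothed prediction and its radius."""
    # Step 1 (adaptive): compute an input-dependent mask from a noisy view of x.
    noisy_view = x + np.random.normal(0.0, sigma, size=x.shape)
    mask = mask_model(noisy_view)  # values in [0, 1], same shape as x

    # Step 2: smooth the classifier over fresh Gaussian noise on the masked input.
    counts = {}
    for _ in range(n_samples):
        noise = np.random.normal(0.0, sigma, size=x.shape)
        label = classifier(mask * (x + noise))
        counts[label] = counts.get(label, 0) + 1

    top_label = max(counts, key=counts.get)
    p_a = counts[top_label] / n_samples  # in practice, lower-bound this (e.g. Clopper-Pearson)

    # Single-step L2 certification radius; ARS replaces this with a bound
    # derived from the adaptive composition of both noisy steps.
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
    return top_label, radius
```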