

Poster in Workshop: 3rd Workshop on New Frontiers in Adversarial Machine Learning (AdvML-Frontiers)

Smoothing-Based Adversarial Defense Methods for Inverse Problems

Yang Sun · Jonathan Scarlett

Keywords: [ Randomized Smoothing ] [ Linear Inverse Problems ] [ Adversarial Defense ]


Abstract:

In this paper, we propose randomized smoothing methods that aim to enhance the robustness of linear inverse problems against adversarial attacks, in particular guaranteeing an upper bound on a suitably-defined notion of sensitivity to perturbations. In addition, we propose two novel algorithms that incorporate randomized smoothing into training: one injects random perturbations directly into the input data, and the other adds random perturbations to the gradients during backpropagation. We conduct numerical evaluations on two of the most prominent inverse problems --- denoising and compressed sensing --- using a variety of neural network estimators and datasets. Across a broad range of scenarios, the results demonstrate the strong potential of randomized smoothing for enhancing the robustness of linear inverse problems.
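Below is a minimal sketch (not the authors' code) of the three ingredients the abstract describes: a smoothed estimator obtained by averaging over Gaussian-perturbed inputs, a training variant that injects random perturbations into the input data, and a training variant that perturbs the gradients during backpropagation. All names and hyperparameters (estimator, sigma, num_samples, learning rates) are illustrative assumptions.

```python
# Hypothetical sketch of smoothing-based defenses for a learned inverse-problem
# estimator; parameter names and noise levels are assumptions, not the paper's.
import torch
import torch.nn as nn


def smoothed_estimate(estimator: nn.Module, y: torch.Tensor,
                      sigma: float = 0.1, num_samples: int = 32) -> torch.Tensor:
    """Monte-Carlo approximation of the smoothed estimator
    E_z[f(y + z)] with z ~ N(0, sigma^2 I), used at inference time."""
    with torch.no_grad():
        outs = [estimator(y + sigma * torch.randn_like(y))
                for _ in range(num_samples)]
    return torch.stack(outs).mean(dim=0)


def train_step_input_noise(estimator, optimizer, y, x_true, sigma=0.1):
    """Training variant 1: inject Gaussian perturbations into the inputs."""
    optimizer.zero_grad()
    x_hat = estimator(y + sigma * torch.randn_like(y))
    loss = nn.functional.mse_loss(x_hat, x_true)
    loss.backward()
    optimizer.step()
    return loss.item()


def train_step_gradient_noise(estimator, optimizer, y, x_true, sigma=1e-3):
    """Training variant 2: add Gaussian perturbations to the gradients
    during backpropagation, before the parameter update."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(estimator(y), x_true)
    loss.backward()
    for p in estimator.parameters():
        if p.grad is not None:
            p.grad.add_(sigma * torch.randn_like(p.grad))
    optimizer.step()
    return loss.item()
```

Intuitively, averaging the estimator's outputs over noisy copies of the measurements damps its response to small input perturbations, which is what allows a sensitivity bound of the kind the abstract refers to.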
