Poster

Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples

Sungyoon Lee · Woojin Lee · Jinseong Park · Jaewook Lee

Virtual

Keywords: [ Robustness ] [ Optimization ] [ Deep Learning ] [ Adversarial Robustness and Security ]


Abstract:

We study the problem of training certifiably robust models against adversarial examples. Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation set, so the tightness of this upper bound is an important factor in building certifiably robust models. However, many studies have shown that Interval Bound Propagation (IBP) training, despite using much looser bounds, outperforms other methods that use tighter bounds. We identify another key factor that influences the performance of certifiable training: \textit{smoothness of the loss landscape}. We find significant differences in the loss landscapes across linear relaxation-based methods, and that the current state-of-the-art method often has a landscape with favorable optimization properties. Moreover, to test this claim, we design a new certifiable training method with both desired properties. Combining tightness and smoothness, the proposed method achieves decent performance under a wide range of perturbations, while methods with only one of the two factors perform well only for a specific range of perturbations. Our code is available at \url{https://github.com/sungyoon-lee/LossLandscapeMatters}.
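For context, the IBP bounds mentioned in the abstract are computed by pushing axis-aligned interval bounds layer by layer through the network, then evaluating the loss on the worst-case logits. Below is a minimal NumPy sketch of this idea; the toy network, weights, perturbation budget, and label are hypothetical illustrations, not the paper's code.

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate interval bounds [lower, upper] through an affine layer y = W x + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # |W| maps the input radius to the output radius
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Hypothetical 2-layer network with random weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

x = rng.normal(size=4)
eps = 0.1  # l_inf perturbation budget
l, u = x - eps, x + eps

l, u = ibp_affine(l, u, W1, b1)
l, u = ibp_relu(l, u)
l, u = ibp_affine(l, u, W2, b2)  # l, u now bound the output logits

# Upper bound on the worst-case cross-entropy loss: the true-class logit
# takes its lower bound, all other logits take their upper bounds.
y = 0  # hypothetical true class
z_worst = u.copy()
z_worst[y] = l[y]
worst_case_loss = -z_worst[y] + np.log(np.sum(np.exp(z_worst)))
print("certified upper bound on the loss:", worst_case_loss)
```

Certifiable training minimizes this worst-case loss bound in place of the standard loss; the paper's point is that the looseness of the interval bounds is traded against the smoothness of the resulting loss landscape.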
