Tutorial
Adversarial Robustness: Theory and Practice
J. Zico Kolter · Aleksander Madry
Room 220 CD
The recent push to adopt machine learning solutions in real-world settings gives rise to a major challenge: can we develop ML solutions that, instead of merely working “most of the time”, are truly reliable and robust? This tutorial will survey some of the key challenges in this context and then focus on the topic of adversarial robustness: the widespread vulnerability of state-of-the-art deep learning models to adversarial misclassification (a.k.a. adversarial examples). We will discuss both the practical and the theoretical aspects of this phenomenon, with an emphasis on recent verification-based approaches to establishing formal robustness guarantees. Our treatment will go beyond viewing adversarial robustness solely as a security question. In particular, we will touch on the role it plays as a regularizer and its relation to generalization.
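As a concrete illustration of the vulnerability the tutorial addresses, the sketch below crafts an adversarial example with the fast gradient sign method (FGSM) of Goodfellow et al.: a single gradient step that maximally increases the loss within a small L-infinity ball around the input. This is a minimal sketch, not the tutorial's own material; the names model, x, and y are hypothetical placeholders for a trained classifier and a labeled input.

    # Minimal FGSM sketch (assumes PyTorch); model, x, y are placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, epsilon=0.03):
        """Perturb input x so as to increase the loss of model on label y."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction that increases the loss, staying within
        # an L-infinity ball of radius epsilon around the original input.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0, 1).detach()  # keep pixel values in a valid range

Even a small epsilon (imperceptible to a human) often suffices to flip the prediction of an undefended model, which is the phenomenon that motivates the verification-based guarantees discussed in the tutorial.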