Poster
in
Workshop: 3rd Workshop on New Frontiers in Adversarial Machine Learning (AdvML-Frontiers)

Rethinking Randomized Smoothing from the Perspective of Scalability

Sukrit Jindal · Devansh Bhardwaj · Anupriya Kumari

Keywords: [ Deep Learning ] [ Randomized Smoothing ] [ Certification-Based Defense ] [ Adversarial Attacks ] [ Scalability ]


Abstract:

Machine learning models have achieved remarkable success across diverse domains but remain vulnerable to adversarial attacks. Empirical defenses often fail as new attacks emerge and render them obsolete, which has shifted attention toward certification-based defenses. Among these, randomized smoothing has emerged as a promising technique. This study reviews the theoretical foundations and empirical effectiveness of randomized smoothing and its derivatives for verifying machine learning classifiers from the perspective of scalability. We explore the fundamental concepts underlying randomized smoothing, highlight its theoretical guarantees for certifying robustness against adversarial perturbations, and discuss the challenges facing existing methodologies.
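To make the certification idea concrete, the standard randomized smoothing procedure (in the style of Cohen et al., 2019) can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the `base_classifier` interface, the toy linear classifier, and the Hoeffding-style confidence bound (Cohen et al. use Clopper-Pearson) are all assumptions made for the sake of a self-contained example.

```python
import math
import random
from collections import Counter
from statistics import NormalDist


def lower_confidence_bound(successes: int, n: int, alpha: float) -> float:
    # Hoeffding-style lower bound on the top-class probability.
    # (A simplification; Cohen et al. use a Clopper-Pearson interval.)
    return successes / n - math.sqrt(math.log(1.0 / alpha) / (2.0 * n))


def certify(base_classifier, x, sigma, n, alpha=0.001, rng=random):
    """Return (predicted class, certified L2 radius) for the smoothed
    classifier at input x, or (None, 0.0) if it must abstain."""
    counts = Counter()
    for _ in range(n):
        # Sample a Gaussian-perturbed copy of x and classify it.
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        counts[base_classifier(noisy)] += 1
    top_class, top_count = counts.most_common(1)[0]
    p_lower = lower_confidence_bound(top_count, n, alpha)
    if p_lower <= 0.5:
        return None, 0.0  # cannot certify: abstain
    # Certified radius from Cohen et al.: R = sigma * Phi^{-1}(p_lower).
    radius = sigma * NormalDist().inv_cdf(p_lower)
    return top_class, radius


# Hypothetical toy base classifier: sign of the first coordinate.
toy_classifier = lambda v: int(v[0] > 0.0)
```

An input far from the toy decision boundary (e.g. `x = [2.0, 0.0]` with `sigma = 0.5`) yields a confident majority class and a positive certified radius, while an input on the boundary makes the smoothed classifier abstain; this illustrates the accuracy/certifiability trade-off that drives the scalability questions discussed above.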
