Poster in Workshop on Machine Learning Safety
Best of Both Worlds: Towards Adversarial Robustness with Transduction and Rejection
Nils Palumbo · Yang Guo · Xi Wu · Jiefeng Chen · Yingyu Liang · Somesh Jha
Abstract:
Both transduction and rejection have emerged as key techniques for building stronger defenses against adversarial perturbations, but existing work has not investigated their combination. Our theoretical analysis shows that combining the two can lead to better guarantees than using transduction or rejection alone. Based on this analysis, we propose a defense algorithm that learns a transductive classifier with a rejection option, and we also propose a strong adaptive attack for evaluating the defense. Experimental results on MNIST and CIFAR-10 show that our defense achieves strong robustness, outperforming existing baselines, including those that use only transduction or only rejection.
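To make the two ingredients concrete, the following is a minimal, hypothetical sketch (not the authors' actual algorithm) of a transductive classifier with a rejection option in PyTorch: the model is adapted on the unlabeled test batch via entropy minimization (transduction), and predictions below a confidence threshold are rejected. The function name, adaptation objective, and threshold are illustrative assumptions.

```python
# Hypothetical sketch: transductive prediction with a rejection option.
# Not the paper's defense; the adaptation loss and threshold are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


def transductive_predict_with_rejection(model, x_test, steps=10, lr=1e-3,
                                         reject_threshold=0.9):
    """Adapt `model` on the unlabeled test batch, then predict,
    abstaining (label -1) on low-confidence inputs."""
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        probs = F.softmax(model(x_test), dim=1)
        # Entropy minimization: a common transductive / test-time objective.
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
        opt.zero_grad()
        entropy.backward()
        opt.step()

    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x_test), dim=1)
        conf, preds = probs.max(dim=1)
        preds[conf < reject_threshold] = -1  # -1 marks a rejected input
    return preds


if __name__ == "__main__":
    # Toy usage: a small MLP on random 28x28 inputs with 10 classes.
    net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128),
                        nn.ReLU(), nn.Linear(128, 10))
    x = torch.randn(32, 1, 28, 28)
    print(transductive_predict_with_rejection(net, x))
```

An adaptive attack against such a defense would need to account for both the test-time adaptation and the rejection rule, which is what makes evaluating this combination nontrivial.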