Poster
BAN: Detecting Backdoors Activated by Neuron Noise
Xiaoyun Xu · Zhuoran Liu · Stefanos Koffas · Shujian Yu · Stjepan Picek
Thu 12 Dec, 11 a.m. – 2 p.m. PST
Abstract:
Backdoor attacks on deep learning are a recent threat that has gained significant attention in the research community. Backdoor defenses are mainly based on backdoor inversion, which has been shown to be generic, model-agnostic, and applicable to practical threat scenarios. State-of-the-art backdoor inversion recovers a mask in the feature space to locate prominent backdoor features, where benign and backdoor features can be disentangled. However, it suffers from high computational overhead, and we also find that it relies too heavily on prominent backdoor features that are highly distinguishable from benign features. To tackle these shortcomings, this paper improves backdoor feature inversion for backdoor detection by incorporating extra neuron activation information. In particular, we adversarially increase the loss of backdoored models with respect to their weights to activate the backdoor effect, based on which we can easily differentiate backdoored and clean models. Experimental results demonstrate that our defense, BAN, is 1.37× (on CIFAR-10) and 5.11× (on ImageNet200) more efficient, with a 9.99% higher detection success rate, than the state-of-the-art defense BTI-DBF (Xu et al., 2024). Our code and trained models are publicly available at https://anonymous.4open.science/r/ban-4B32.
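For intuition, below is a minimal PyTorch sketch of the core idea the abstract describes: take one adversarial ascent step on the model's weights (neuron noise) and measure how much the loss on clean inputs increases. The function name, the single-step ascent, the per-layer scaling, the epsilon value, and the use of a single batch are simplifying assumptions made here for illustration, not the paper's exact procedure.

```python
import copy
import torch
import torch.nn.functional as F

def neuron_noise_loss_gap(model, batch, epsilon=0.1, device="cpu"):
    """Measure how much one adversarial weight-perturbation step
    ("neuron noise") increases the loss on clean inputs.
    Intuition: small weight noise can re-activate a dormant backdoor,
    so a backdoored model tends to show a larger gap than a clean one.
    NOTE: illustrative sketch only; the paper's method also uses
    feature-space mask inversion."""
    # Work on a copy so the caller's model is not mutated.
    model = copy.deepcopy(model).to(device).eval()
    x, y = batch
    x, y = x.to(device), y.to(device)

    # Baseline loss with unperturbed weights.
    with torch.no_grad():
        base_loss = F.cross_entropy(model(x), y).item()

    # Gradient of the loss w.r.t. the weights (not the inputs).
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))

    # One signed ascent step on the weights, scaled per layer so the
    # perturbation is relative to each layer's weight magnitude.
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(epsilon * p.abs().mean() * g.sign())
        perturbed_loss = F.cross_entropy(model(x), y).item()

    return perturbed_loss - base_loss  # larger gap => more suspicious
```

Comparing this gap against a threshold calibrated on known-clean models would give a simple detector under these assumptions; the full method described in the paper combines this neuron-activation signal with backdoor feature inversion.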