Poster
WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks
Jun Xia · Zhihao Yue · Yingbo Zhou · Zhiwei Ling · Yiyu Shi · Xian Wei · Mingsong Chen
Due to the increasing popularity of Artificial Intelligence (AI), more and more backdoor attacks are designed to mislead Deep Neural Network (DNN) predictions by manipulating training samples or training processes. Although backdoor attacks have been investigated in various real-world scenarios, they still suffer from both low fidelity of poisoned samples and non-negligible transfer in latent space, which make them easily identifiable by existing backdoor detection algorithms. To overcome these weaknesses, this paper proposes a novel frequency-based backdoor attack method named WaveAttack, which obtains high-frequency image features through the Discrete Wavelet Transform (DWT) to generate highly stealthy backdoor triggers. By introducing an asymmetric frequency obfuscation method, our approach adds an adaptive residual in both the training and inference stages to improve the impact of triggers, thus further enhancing the effectiveness of WaveAttack. Comprehensive experimental results show that WaveAttack not only achieves higher effectiveness than state-of-the-art backdoor attack methods, but also outperforms them in image fidelity (i.e., by up to a 28.27\% improvement in PSNR, a 1.61\% improvement in SSIM, and a 70.59\% reduction in IS). Our code is available at \url{https://anonymous.4open.science/r/AnonymousRep-701D}.
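To make the core idea concrete, below is a minimal, hypothetical sketch in pure Python of the frequency decomposition the abstract describes: a single-level 2D Haar DWT splits an image into low-frequency (LL) and high-frequency (LH, HL, HH) sub-bands, and a trigger residual is injected into the HH sub-band, where it is least perceptible. The `haar_dwt2` and `add_trigger` functions, the choice of the Haar wavelet, and the `alpha` scaling parameter are illustrative assumptions; the paper's actual method learns the residual and applies asymmetric scaling between training and inference.

```python
def haar_dwt2(img):
    """Single-level 2D Haar DWT (averages/differences) on a 2D list
    with even height and width. Returns the (LL, LH, HL, HH) sub-bands.
    Illustrative only; real implementations typically use PyWavelets."""
    h, w = len(img), len(img[0])
    # Transform each row: first half = pairwise averages (low-pass),
    # second half = pairwise half-differences (high-pass).
    rows = []
    for r in img:
        lo = [(r[2 * i] + r[2 * i + 1]) / 2 for i in range(w // 2)]
        hi = [(r[2 * i] - r[2 * i + 1]) / 2 for i in range(w // 2)]
        rows.append(lo + hi)
    # Apply the same low/high split down each column.
    out = [[0.0] * w for _ in range(h)]
    for c in range(w):
        col = [rows[r][c] for r in range(h)]
        lo = [(col[2 * i] + col[2 * i + 1]) / 2 for i in range(h // 2)]
        hi = [(col[2 * i] - col[2 * i + 1]) / 2 for i in range(h // 2)]
        merged = lo + hi
        for r in range(h):
            out[r][c] = merged[r]
    hh, hw = h // 2, w // 2
    LL = [row[:hw] for row in out[:hh]]  # low-freq approximation
    LH = [row[hw:] for row in out[:hh]]  # horizontal detail
    HL = [row[:hw] for row in out[hh:]]  # vertical detail
    HH = [row[hw:] for row in out[hh:]]  # diagonal (highest-freq) detail
    return LL, LH, HL, HH


def add_trigger(HH, residual, alpha=1.0):
    """Add a scaled residual trigger to the HH sub-band. In WaveAttack the
    residual is learned and alpha differs between training and inference
    (asymmetric obfuscation); both are hypothetical placeholders here."""
    return [[h + alpha * t for h, t in zip(h_row, t_row)]
            for h_row, t_row in zip(HH, residual)]
```

A poisoned sample would then be reconstructed with the inverse DWT from `(LL, LH, HL, add_trigger(HH, residual))`; because the perturbation lives entirely in the highest-frequency sub-band, its pixel-space footprint stays small, which is consistent with the PSNR/SSIM fidelity gains the abstract reports.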