Poster in Workshop: Backdoors in Deep Learning: The Good, the Bad, and the Ugly
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection
Saket Sanjeev Chaturvedi · Lan Zhang · Wenbin Zhang · Pan He · Xiaoyong Yuan
3D object detection plays an important role in autonomous driving; however, its vulnerability to backdoor attacks has become evident. By injecting "triggers" that poison the training dataset, backdoor attacks manipulate the detector's predictions for inputs containing these triggers. Existing backdoor attacks against 3D object detection primarily poison 3D LiDAR signals, injecting large 3D triggers to ensure their visibility within the sparse 3D space; this makes them easy to detect and impractical in real-world scenarios. In this paper, we delve into the robustness of 3D object detection, exploring a new backdoor attack surface through 2D cameras. Given the prevalent adoption of camera-LiDAR fusion for high-fidelity 3D perception, we investigate the latent potential of camera signals to disrupt this fusion process. Although the dense nature of camera signals enables nearly imperceptible, small-sized triggers to mislead 2D object detection, realizing 2D-oriented backdoor attacks against 3D object detection is non-trivial. The primary challenge arises from the fusion process that transforms camera signals into 3D space, which weakens the association between the 2D trigger and the target output. To tackle this issue, we propose BadFusion, an innovative 2D-oriented backdoor attack against LiDAR-camera fusion methods for 3D object detection, designed to preserve trigger effectiveness throughout the entire fusion process. Extensive experiments validate the effectiveness of BadFusion, which achieves a significantly higher attack success rate than existing 2D-oriented attacks.
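To make the poisoning mechanism concrete, below is a minimal, hypothetical sketch of how a small 2D patch trigger could be stamped onto camera frames in a training set. This is not the paper's BadFusion implementation (the abstract does not specify the trigger design); all names (poison_frame, TRIGGER_SIZE, the KITTI-like resolution) and the choice of a solid patch placed at an object's 2D box corner are illustrative assumptions.

```python
# Illustrative sketch of 2D patch-trigger poisoning for camera frames.
# NOT the actual BadFusion method; parameters and placement are assumptions.
import numpy as np

TRIGGER_SIZE = 8     # assumed: a small patch, nearly imperceptible in a dense image
TRIGGER_VALUE = 255  # assumed: a solid white square as the trigger pattern


def poison_frame(image: np.ndarray, box: tuple) -> np.ndarray:
    """Stamp a small patch trigger inside a 2D bounding box of the camera image.

    image: HxWx3 uint8 camera frame
    box:   (x1, y1, x2, y2) pixel coordinates of the object to attack
    """
    x1, y1, x2, y2 = box
    poisoned = image.copy()
    # Place the trigger at the top-left corner of the object's 2D box so it
    # stays attached to the object across frames (an illustrative choice).
    poisoned[y1:y1 + TRIGGER_SIZE, x1:x1 + TRIGGER_SIZE] = TRIGGER_VALUE
    return poisoned


if __name__ == "__main__":
    # Assumed KITTI-like camera resolution for illustration.
    frame = np.random.randint(0, 255, (375, 1242, 3), dtype=np.uint8)
    poisoned = poison_frame(frame, box=(100, 150, 220, 260))
    # In a poisoning attack, the 3D label for this object would also be
    # changed to the attacker's target, so the detector learns to associate
    # the patch with the wrong output.
    print(poisoned.shape, int((poisoned != frame).sum()), "values modified")
```

The paper's core difficulty, as the abstract notes, is that such a naive 2D trigger tends to lose its effect once camera features are projected into 3D space during fusion; BadFusion is designed to keep the trigger effective through that projection.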