Poster
Unelicitable Backdoors via Cryptographic Transformer Circuits
Andis Draguns · Andrew Gritsevskiy · Sumeet Motwani · Christian Schroeder de Witt
The rapid proliferation of open-source language models significantly increases the risks of downstream backdoor attacks. These backdoors can introduce dangerous behaviors during model deployment and can evade detection by conventional cybersecurity monitoring systems. In this paper, we introduce a novel class of backdoors in autoregressive transformer models, that, in contrast to prior art, are unelicitable in nature. Unelicitability prevents the defender from triggering the backdoor, making it impossible to evaluate or detect ahead of deployment if if given full white-box access and using automated techniques, such as red-teaming or formal verification methods. We show that our novel construction is not only unelicitable thanks to using cryptographic techniques, but also has favourable robustness properties.We confirm these properties in empirical investigations, and show evidence that our backdoors can withstand state-of-the-art mitigation strategies. Additionally, we expand on previous work by showing that our universal backdoors, while not completely undetectable in white-box settings, can be significantly harder to detect than existing approaches. By demonstrating the feasibility of seamlessly integrating backdoors into transformer models, this paper fundamentally questions the efficacy of pre-deployment detection strategies. This offers new insights into the offense-defense balance in AI safety and security.
Live content is unavailable. Log in and register to view live content