Poster in Workshop: Workshop on robustness of zero/few-shot learning in foundation models (R0-FoMo)
Jailbreaking Black Box Large Language Models in Twenty Queries
Patrick Chao · Alexander Robey · Edgar Dobriban · Hamed Hassani · George J. Pappas · Eric Wong
There is growing research interest in ensuring that large language models align with human safety and ethical guidelines. Adversarial attacks known as 'jailbreaks' pose a significant threat, as they coax models into overriding alignment safeguards. Identifying these vulnerabilities by attacking a language model (red teaming) is instrumental in understanding inherent weaknesses and preventing misuse. We present Prompt Automatic Iterative Refinement (PAIR), which generates semantic jailbreaks with only black-box access to a language model. Empirically, PAIR often requires fewer than twenty queries, orders of magnitude fewer than prior jailbreak attacks. PAIR draws inspiration from the human process of social engineering and employs an attacker language model to automatically generate adversarial prompts in place of a human. The attacker model uses the target model's response as additional context to iteratively refine the adversarial prompt. PAIR achieves competitive jailbreaking success rates and transferability on open- and closed-source language models, including GPT-3.5/4, Vicuna, and PaLM.
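The refinement loop described in the abstract (attacker proposes a prompt, the target's response is fed back as context, repeat until success or a query budget is exhausted) can be summarized as the sketch below. This is a minimal illustration of that outer loop only, not the authors' implementation; `query_attacker`, `query_target`, and `is_jailbroken` are hypothetical callables standing in for the black-box attacker and target model calls and for a success judge.

```python
from typing import Callable, Optional


def pair_style_attack(
    goal: str,
    query_attacker: Callable[[list[str]], str],  # assumed interface: conversation history -> candidate prompt
    query_target: Callable[[str], str],          # assumed interface: prompt -> black-box target response
    is_jailbroken: Callable[[str, str], bool],   # assumed interface: (goal, response) -> success flag
    max_queries: int = 20,
) -> Optional[str]:
    """Iteratively refine an adversarial prompt with an attacker LLM (sketch)."""
    history = [f"Objective: elicit the following behavior from the target: {goal}"]
    for _ in range(max_queries):
        # Attacker LLM proposes (or refines) a candidate adversarial prompt.
        candidate = query_attacker(history)
        # Query the black-box target model with the candidate prompt.
        response = query_target(candidate)
        # Stop if the judge deems the target's response a successful jailbreak.
        if is_jailbroken(goal, response):
            return candidate
        # Otherwise, feed the target's response back as additional context
        # for the next round of refinement.
        history.append(f"Prompt tried: {candidate}")
        history.append(f"Target responded: {response}")
    return None  # no jailbreak found within the query budget
```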