

Poster in Workshop: Red Teaming GenAI: What Can We Learn from Adversaries?

Diverse and Effective Red Teaming with Auto-generated Rewards and Multi-step Reinforcement Learning

Alex Beutel · Kai Xiao · Johannes Heidecke · Lilian Weng

Keywords: [ reinforcement learning ] [ red teaming ]


Abstract:

Automated red-teaming can discover rare model failures and generate challenging examples that can be used for training or evaluation. However, a core challenge in automated red-teaming is ensuring that the attacks are both diverse and effective. Prior methods typically succeed in optimizing either for diversity or for effectiveness, but rarely both. In this paper, we provide methods that enable automated red-teaming to generate a large number of diverse and successful attacks. Our approach is decomposed into two steps: (1) automated methods for generating diverse attack goals and (2) generating effective attacks for those goals. While we provide multiple straightforward methods for generating diverse goals, our key contribution is training an RL attacker that both follows those goals and generates diverse attacks for them. First, we demonstrate that a large language model (LLM) can readily generate diverse attacker goals, along with per-goal prompts and rule-based rewards (RBRs) that grade whether an attack ultimately succeeds. Second, we demonstrate how training the attacker model with multi-step RL, where the model is rewarded for generating attacks that differ from its past attempts, increases diversity with only a marginal decrease in effectiveness. We use our approach to generate both prompt injection attacks and prompts that elicit unsafe responses. In both cases, we find that our approach generates highly effective and considerably more diverse attacks than past general red-teaming approaches.
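As a concrete illustration of the second step, here is a minimal sketch of a diversity-augmented reward of the kind the abstract describes. The embedding model, the similarity penalty form, the mixing weight lam, and the toy rule-based grader are all assumptions made for illustration; the paper's actual RBRs and reward shaping may differ.

```python
# Illustrative sketch only: the embedder, the penalty form, and the toy
# RBR grader below are assumptions, not the authors' implementation.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def rbr_success(target_response: str, rule: str) -> float:
    # Placeholder rule-based reward (RBR): a real grader would be an LLM
    # or classifier checking the response against the per-goal rule.
    return float(rule.lower() in target_response.lower())

def diversity_bonus(attack: str, past_attacks: list[str]) -> float:
    # Reward attacks that are far (in embedding space) from past attempts.
    if not past_attacks:
        return 1.0
    a = embedder.encode(attack)
    past = embedder.encode(past_attacks)
    # Cosine similarity to the most similar past attack.
    sims = past @ a / (np.linalg.norm(past, axis=1) * np.linalg.norm(a) + 1e-8)
    return 1.0 - float(np.max(sims))  # 0 when identical to a past attack

def reward(attack: str, target_response: str, rule: str,
           past_attacks: list[str], lam: float = 0.5) -> float:
    # Combined multi-step objective: the attack should succeed against the
    # per-goal rule AND differ from the attacker's earlier attempts.
    return rbr_success(target_response, rule) + lam * diversity_bonus(attack, past_attacks)
```

Under these assumptions, each step of a multi-step RL episode would have the attacker emit a new attack, receive this reward, and append the attack to past_attacks, pushing later attempts toward regions of the attack space not yet explored.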
