Poster
in
Workshop: Red Teaming GenAI: What Can We Learn from Adversaries?
Lessons From Red Teaming 100 Generative AI Products
Blake Bullwinkel · Amanda Minnich · Shiven Chawla · Gary Lopez Munoz · Martin Pouliot · Whitney Maxwell · Joris de Gruyter · Katherine Pratt · Saphir Qi · Nina Chikanov · Roman Lutz · Raja Sekhar Rao Dheekonda · Bolor-Erdene Jagdagdorj · Rich Lundeen · Sam Vaughan · Victoria Westerhoff · Pete Bryan · Ram Shankar Siva Kumar · Yonatan Zunger · Mark Russinovich
Keywords: [ AI red teaming ]
In recent years, AI red teaming has emerged as a practice for probing the safety and security of generative AI systems. Due to the nascency of the field, there is significant debate about how red teaming operations should be conducted. Based on our experience red teaming over 100 generative AI products at Microsoft, we present our internal threat model ontology and eight main lessons we have learned:1. Understand what the system can do and where it is applied2. You don’t have to compute gradients to break an AI system3. AI red teaming is not safety benchmarking4. Automation can help cover more of the risk landscape5. The human element of AI red teaming is crucial6. Responsible AI harms are pervasive but difficult to measure7. LLMs amplify existing security risks and introduce new ones8. AI safety and security will never be "solved"By sharing these qualitative insights alongside examples from our operations, we offer practical recommendations aimed at aligning red teaming efforts with real world risks. We also highlight aspects of AI red teaming that are often misunderstood and discuss open questions for the field to consider.