Poster
in
Workshop: Workshop on Responsibly Building Next Generation of Multimodal Foundation Models

Attention Shift: Steering AI Away from Unsafe Content

Shivank Garg · Manyana Tiwari

Keywords: [ SafeGenerativeAI ] [ Unlearning ] [ AI Ethics ] [ Diffusion Models ]


Abstract:

This study analyses the generation of unsafe or harmful content in state-of-the-art generative models, with a focus on techniques for restricting such generations. We introduce a training-free approach that uses attention reweighing to remove unsafe concepts at inference time, without additional training. We compare model performance after applying ablation techniques against both direct and jailbreak prompt attacks, hypothesize potential reasons for the observed results, and discuss the limitations and broader implications of these approaches.
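The core idea of attention reweighing can be sketched as scaling the cross-attention probabilities that image queries assign to text tokens, so that tokens tied to an unsafe concept receive little or no attention mass. The sketch below is illustrative only: the function name, the use of a hard zero weight, and the token index are assumptions, not the authors' exact method.

```python
import numpy as np

def reweighted_cross_attention(q, k, v, token_weights=None):
    """Cross-attention with optional per-token reweighting.

    q: (n_queries, d) image-patch queries
    k, v: (n_tokens, d) text-token keys and values
    token_weights: (n_tokens,) multiplier on attention probabilities
        (1.0 = unchanged, 0.0 = concept fully suppressed)
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # numerically stable softmax over the token axis
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    if token_weights is not None:
        attn = attn * token_weights               # downweight unsafe tokens
        attn /= attn.sum(axis=-1, keepdims=True)  # renormalize rows
    return attn @ v, attn

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # image-patch queries
k = rng.normal(size=(5, 8))   # text-token keys
v = rng.normal(size=(5, 8))

weights = np.ones(5)
weights[2] = 0.0              # hypothetical unsafe-concept token at index 2
out, attn = reweighted_cross_attention(q, k, v, weights)
```

Because only the attention probabilities are rescaled and renormalized, no model weights change, which is what makes this style of ablation training-free.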
