

Poster
in
Workshop: Foundation Model Interventions

Pay Attention to What Matters

Pedro Silva · Fadhel Ayed · Antonio De Domenico · Ali Maatouk

Keywords: [ Mechanistic Interventions ] [ Transformer Interpretability ] [ Large Language Models ] [ Alignment ]


Abstract:

Despite the remarkable success of Large Language Models (LLMs), they still exhibit a limited ability to align their outputs with user instructions. In this work, we introduce a simple and effective method, which we call GUIDE, that mechanistically increases the attention scores assigned to instruction tokens. To support this operation, we present Influence, a novel metric that highlights how user instructions propagate through transformer layers and impact the LLM output. Our results show that GUIDE improves the accuracy of following certain instructions by 29.4% to 60.4%, outperforming natural prompting alternatives.
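The abstract describes GUIDE as mechanistically increasing attention scores on instruction tokens. As a hedged illustration of that general idea only (the authors' exact implementation is not given here), the sketch below adds a constant bias to the pre-softmax attention logits at instruction-token positions; the names `biased_attention`, `instruction_mask`, and `delta` are illustrative assumptions, not the paper's API.

```python
# Minimal sketch of attention steering toward instruction tokens.
# Assumption: GUIDE-style interventions can be approximated by adding a
# constant delta to the attention logits of instruction-token keys; the
# paper's actual mechanism and hyperparameters may differ.

import torch
import torch.nn.functional as F

def biased_attention(q, k, v, instruction_mask, delta=1.0):
    """Scaled dot-product attention with an additive bias on instruction tokens.

    q, k, v:          (seq_len, d) query/key/value matrices for one head.
    instruction_mask: (seq_len,) bool tensor, True at instruction-token positions.
    delta:            bias added to the logits of instruction-token keys
                      (assumed hyperparameter, not from the paper).
    """
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d**0.5      # (seq_len, seq_len)
    # Raise the logits of key columns that correspond to instruction tokens,
    # so every query attends more strongly to the instruction.
    logits = logits + delta * instruction_mask.to(logits.dtype)
    weights = F.softmax(logits, dim=-1)
    return weights @ v

# Toy usage: 6 tokens, positions 0-2 are the instruction.
torch.manual_seed(0)
q = k = v = torch.randn(6, 8)
mask = torch.tensor([True, True, True, False, False, False])
out = biased_attention(q, k, v, mask, delta=2.0)
print(out.shape)  # torch.Size([6, 8])
```

Because the bias is applied before the softmax, it reweights attention toward the instruction without retraining or modifying model weights, which matches the inference-time, mechanistic character of the intervention the abstract describes.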
