

Poster in Workshop: Foundation Model Interventions

Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering

Joris Postmus · Steven Abreu

Keywords: [ large language models ] [ model steering ] [ activation engineering ] [ mechanistic interventions ] [ activation addition ] [ function vectors ]


Abstract:

Large language models (LLMs) have transformed AI, but reliably controlling their outputs remains challenging. This paper explores activation engineering, in which the outputs of pre-trained LLMs are controlled by manipulating their activations at inference time. Unlike traditional methods that use a single steering vector, we introduce conceptors: mathematical constructs that represent sets of activation vectors as ellipsoidal regions. Conceptors act as soft projection matrices and offer more precise control over complex activation patterns. Our experiments demonstrate that conceptors outperform traditional methods across multiple steering tasks, including in-context learning and toxicity removal. We further use a Boolean algebra over conceptors that allows steering goals to be combined with Boolean logic; empirically, this outperforms combining steering vectors on a set of tasks. These results highlight conceptors as a promising tool for more effective steering of LLMs.
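To make the mechanism concrete, below is a minimal NumPy sketch (not the authors' implementation) of conceptor-based steering under the standard conceptor definition C = R (R + α⁻² I)⁻¹, where R is the correlation matrix of a set of activation vectors and α is the aperture. It also includes the usual Boolean operations (AND, OR, NOT) over conceptors, which is how combined steering goals can be expressed. All function names, the aperture value, and the example activations are illustrative assumptions.

```python
# Minimal sketch of conceptor-based activation steering (illustrative, not the paper's code).
import numpy as np

def compute_conceptor(activations: np.ndarray, aperture: float = 10.0) -> np.ndarray:
    """Conceptor C = R (R + aperture^-2 I)^-1, with R the activation correlation matrix.

    activations: array of shape (n_samples, d), e.g. cached hidden states for a concept.
    """
    n, d = activations.shape
    R = activations.T @ activations / n                      # d x d correlation matrix
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(d))

def apply_conceptor(hidden_state: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Softly project a hidden state into the ellipsoidal region captured by C."""
    return C @ hidden_state

# Boolean algebra over conceptors (standard definitions; assumes the matrices are non-singular).
def conceptor_and(C1: np.ndarray, C2: np.ndarray) -> np.ndarray:
    d = C1.shape[0]
    return np.linalg.inv(np.linalg.inv(C1) + np.linalg.inv(C2) - np.eye(d))

def conceptor_not(C: np.ndarray) -> np.ndarray:
    return np.eye(C.shape[0]) - C

def conceptor_or(C1: np.ndarray, C2: np.ndarray) -> np.ndarray:
    return conceptor_not(conceptor_and(conceptor_not(C1), conceptor_not(C2)))

# Example: steer a single hidden state toward the region spanned by some cached activations.
acts = np.random.randn(100, 64)          # stand-in for cached residual-stream activations
C = compute_conceptor(acts, aperture=10.0)
h_steered = apply_conceptor(np.random.randn(64), C)
```

In practice, a step like `apply_conceptor` would be attached as a forward hook at a chosen transformer layer so that hidden states are projected during generation; the layer and aperture are hyperparameters chosen per steering task.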
