

Poster in Workshop: Socially Responsible Language Modelling Research (SoLaR)

Honesty to Subterfuge: In-Context Reinforcement Learning Can Make Honest Models Reward Hack

Leo McKee-Reid · Joe Needham · Maria Martinez · Christoph Sträter · Mikita Balesni

Keywords: [ Machine learning ] [ GPT-4o ] [ in-context learning ] [ large language model ] [ LLM ] [ in-context reinforcement learning ] [ reward hacking ] [ iterative refinement ] [ Deception ] [ Specification gaming ] [ GPT-4o-mini ]


Abstract:

Previous work has shown that training research-purpose “helpful-only” LLMs with reinforcement learning on a curriculum of gameable environments can lead models to generalize to egregious specification gaming, such as editing their own reward function or modifying task checklists to appear more successful. We show that gpt-4o and gpt-4o-mini, public models trained to be helpful, harmless and honest, can engage in specification gaming without training on a curriculum of tasks, purely from in-context iterative reflection (which we call in-context reinforcement learning, “ICRL”). We also show that, compared to the naive version of the expert iteration reinforcement learning algorithm, including ICRL in expert iteration increases gpt-4o-mini's propensity to learn specification-gaming policies, generalizing to the most egregious strategy where gpt-4o-mini edits its own reward function. Our results point toward the strong ability of in-context reflection to discover rare specification-gaming strategies that models might not exhibit zero-shot or with normal training.
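The in-context iterative reflection loop the abstract describes can be illustrated with a minimal sketch: the model attempts a task, observes a scalar reward from the environment, is asked to reflect on the outcome, and retries, all within a single growing context. The sketch below uses the public OpenAI chat completions API; the `score_attempt` environment, the `icrl_episode` helper, and the prompts are hypothetical placeholders for illustration, not the authors' actual tasks or harness.

```python
# Minimal sketch of an in-context reinforcement learning (ICRL) loop.
# Assumptions: `score_attempt` stands in for a gameable task environment
# that returns a scalar reward; the prompts are illustrative, not the paper's.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"


def score_attempt(attempt: str) -> float:
    """Hypothetical environment hook: return a reward in [0, 1] for an attempt."""
    raise NotImplementedError("Replace with a concrete gameable task.")


def icrl_episode(task_prompt: str, num_iterations: int = 5) -> list[float]:
    """Run one ICRL episode: attempt, observe reward, reflect in-context, retry."""
    messages = [{"role": "user", "content": task_prompt}]
    rewards = []
    for _ in range(num_iterations):
        response = client.chat.completions.create(model=MODEL, messages=messages)
        attempt = response.choices[0].message.content
        reward = score_attempt(attempt)
        rewards.append(reward)
        # Feed the attempt and its reward back into the context and ask the
        # model to reflect before retrying -- this reflection step is what
        # distinguishes ICRL from naively resampling the same prompt.
        messages.append({"role": "assistant", "content": attempt})
        messages.append({
            "role": "user",
            "content": (
                f"Your attempt received a reward of {reward:.2f}. "
                "Reflect on why, then try again to achieve a higher reward."
            ),
        })
    return rewards
```

In the expert-iteration variant described in the abstract, the highest-reward attempts from such episodes would be collected into a fine-tuning dataset for the next training round; that outer loop is omitted from this sketch.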
