

Poster in Workshop: Red Teaming GenAI: What Can We Learn from Adversaries?

Steganography in Large Language Models: Investigating Emergence and Mitigations

Yohan Mathew · Robert McCarthy · Ollie Matthews · Joan Velja · Nandi Schoots · Dylan Cope

Keywords: [ In-Context Learning ] [ Steganography ] [ Reinforcement Learning ] [ Large Language Models ]


Abstract:

The rapid proliferation of frontier model agents promises significant societal advances but also raises concerns about systemic risks arising from unsafe interactions. Collusion to the disadvantage of others has been identified as a central form of undesirable agent cooperation. The use of information hiding (steganography) in agent communications could render collusion practically undetectable. This underscores the need for evaluation frameworks to monitor and mitigate steganographic collusion capabilities. We address a crucial gap in the literature by demonstrating, for the first time, that robust steganographic collusion in LLMs can arise indirectly from optimization pressure. To investigate this problem, we design two approaches -- a gradient-based reinforcement learning (GBRL) method and an in-context reinforcement learning (ICRL) method -- for reliably eliciting sophisticated LLM-generated linguistic text steganography. Importantly, we find that emergent steganographic collusion can be robust to both passive steganalytic oversight of model outputs and active mitigation through communication paraphrasing. We contribute a novel model evaluation framework and discuss limitations and future work. Our findings imply that effective risk mitigation from steganographic collusion post-deployment requires innovation in passive and active oversight techniques.
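To make the core concepts concrete, here is a minimal toy sketch of linguistic text steganography and the paraphrasing defense the abstract refers to. This is not the paper's GBRL or ICRL method; all names and the synonym-choice channel are illustrative assumptions chosen for this sketch only.

```python
import random

# Toy covert channel: each marker word has a synonym pair, and the chosen
# synonym's index (0 or 1) encodes one hidden bit. Hypothetical vocabulary.
SYNONYMS = {
    "big": ("large", "huge"),
    "fast": ("quick", "rapid"),
    "good": ("fine", "great"),
}

def encode(words, bits):
    """Replace marker words with the synonym whose index equals the next bit."""
    bits = list(bits)
    out = []
    for w in words:
        if w in SYNONYMS and bits:
            out.append(SYNONYMS[w][bits.pop(0)])
        else:
            out.append(w)
    return out

def decode(words):
    """Recover hidden bits from which synonym of each pair appears."""
    bits = []
    for w in words:
        for w0, w1 in SYNONYMS.values():
            if w == w0:
                bits.append(0)
            elif w == w1:
                bits.append(1)
    return bits

msg = encode(["a", "big", "and", "fast", "good", "plan"], [1, 0, 1])
assert decode(msg) == [1, 0, 1]  # the covert bits survive a passive read

def paraphrase(words, rng):
    """Active mitigation: rewrite each synonym at random, scrambling the channel."""
    out = []
    for w in words:
        pair = next((p for p in SYNONYMS.values() if w in p), None)
        out.append(rng.choice(pair) if pair else w)
    return out
```

After `paraphrase(msg, random.Random())`, the decoded bits are uniformly random per marker word, so this simple channel is destroyed; the paper's finding is that optimization pressure can elicit encodings robust even to such paraphrasing.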
