

Poster in Workshop: 3rd Workshop on New Frontiers in Adversarial Machine Learning (AdvML-Frontiers)

Logicbreaks: A Framework for Understanding Subversion of Rule-based Inference

Anton Xue · Avishree Khare · Rajeev Alur · Surbhi Goel · Eric Wong

Keywords: [ reasoning ] [ jailbreak ] [ language model ] [ inference ] [ logic ]


Abstract: We study how to subvert language models from following the rules. We model rule-following as inference in propositional Horn logic, a mathematical system in which rules have the form "if $P$ and $Q$, then $R$" for some propositions $P$, $Q$, and $R$. We prove that although transformers can faithfully abide by such rules, maliciously crafted prompts can nevertheless mislead even theoretically constructed models. Empirically, we find that attacks on our theoretical models mirror popular attacks on large language models. Our work suggests that studying smaller theoretical models can help understand the behavior of large language models in rule-based settings like logical reasoning and jailbreak attacks.
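For readers unfamiliar with propositional Horn logic, the sketch below illustrates the kind of rule-based inference the abstract models: forward chaining, where rules of the form "if $P$ and $Q$, then $R$" are applied until no new propositions can be derived. The encoding and the name `forward_chain` are illustrative assumptions, not the paper's construction.

```python
# A minimal sketch of forward-chaining inference in propositional Horn
# logic. Each rule is a pair (antecedents, consequent); the encoding
# here is a hypothetical illustration, not the authors' implementation.

def forward_chain(rules, facts):
    """Apply Horn rules until fixpoint; return all derivable propositions."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            # Fire a rule when all its antecedents are already known.
            if consequent not in known and antecedents <= known:
                known.add(consequent)
                changed = True
    return known

# Example: "if P and Q, then R" plus "if R, then S", starting from P and Q.
rules = [({"P", "Q"}, "R"), ({"R"}, "S")]
print(forward_chain(rules, {"P", "Q"}))  # derives {'P', 'Q', 'R', 'S'}
```

A faithful rule-follower derives exactly this fixpoint; the abstract's attacks concern prompts that cause a model to deviate from it.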
