

Poster in Workshop: Language Gamification

Improving Branching Language via Self-Reflection

Kolby T Nottingham · Ruo-Ping Dong · Ben Kasper · Wesley Kerr


Abstract:

While most language is formatted linearly, applications such as planning, Tree of Thoughts, and branching narrative represent text in a tree structure. Generating branching outputs from a language model (LM) is trivial, but representing trees of text in a one-dimensional input is problematic. This makes popular self-reflection methods of improvement prohibitively difficult to apply to branching language. In this work, we address this limitation by proposing a new method for improving trees of branching language. Our method iterates between reflecting on sampled paths through a tree and resampling problematic subtrees. We evaluate our method on a branching narrative task with the objective of improving every path through the tree. An LM judge prefers narrative created by our method 60% more often than unmodified narrative trees. Our method also scales to tree depths that cause naive methods of self-reflection to fail.
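The abstract's core loop — sample a path through the tree, reflect on it, and resample the subtree the reflection flags — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`TreeNode`, `critique_path`, `resample_subtree`) are hypothetical, and the two stand-in functions replace actual LM calls for self-reflection and regeneration.

```python
import random

# Hypothetical sketch of the reflect-and-resample loop described in the
# abstract. critique_path and resample_subtree stand in for LM calls.

class TreeNode:
    def __init__(self, text, children=None):
        self.text = text
        self.children = children or []

def sample_path(root, rng):
    """Randomly walk from the root to a leaf, returning the visited nodes."""
    path = [root]
    node = root
    while node.children:
        node = rng.choice(node.children)
        path.append(node)
    return path

def critique_path(path):
    """Stand-in for LM self-reflection on a linear path: flag the first
    node containing a placeholder defect marker, if any."""
    for i, node in enumerate(path):
        if "[weak]" in node.text:
            return i  # index of the problematic node on this path
    return None

def resample_subtree(node):
    """Stand-in for regenerating a flagged subtree with the LM: here we
    simply strip the defect marker and discard the stale children."""
    node.text = node.text.replace("[weak]", "").strip()
    node.children = []

def improve_tree(root, iterations=20, seed=0):
    """Iterate: sample a path, reflect on it, resample flagged subtrees."""
    rng = random.Random(seed)
    for _ in range(iterations):
        path = sample_path(root, rng)
        bad = critique_path(path)
        if bad is not None:
            resample_subtree(path[bad])
    return root
```

Note how each reflection step only ever sees one linear path, which is what lets a standard self-reflection prompt operate on a tree-structured output.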
