Oral in Workshop: Language Gamification

Efficacy of Language Model Self-Play in Non-Zero-Sum Games

Austen Liao · Nicholas Tomlin · Dan Klein

Sat 14 Dec 10 a.m. PST — 10:05 a.m. PST

Presentation: Language Gamification
Sat 14 Dec 8:20 a.m. PST — 5:30 p.m. PST

Abstract:

Game-playing agents like AlphaGo have achieved superhuman performance through self-play, which is theoretically guaranteed to yield optimal policies in competitive games. However, most language tasks are partially or fully cooperative, so it is an open question whether techniques like self-play can be used effectively to improve language models. We empirically investigate this question in a negotiation game setting known as Deal or No Deal (DoND). Crucially, the objective in DoND can be modified to produce a fully cooperative game, a strictly competitive one, or anything in between. We finetune language models in self-play over multiple rounds of filtered behavior cloning in DoND for each of these objectives and evaluate them in self-play and in collaboration with humans. We find that language models improve substantially in self-play, achieving 14-17× higher task reward after finetuning. Further, the trained models generalize to both cooperation and competition with humans, scoring 2.5-6× higher than base models. We view these results as an early promising sign for language model self-play in cooperative settings, despite the lack of theoretical guarantees.
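The training loop the abstract describes (multiple rounds of filtered behavior cloning in self-play, with an objective that can slide between cooperation and competition) can be sketched compactly. Below is a minimal, hypothetical Python skeleton: `play_episode`, `finetune`, and the lambda-interpolated reward are illustrative stand-ins, not the authors' actual environment, training code, or exact objective formulation.

```python
import random


def blended_reward(own: float, other: float, lam: float) -> float:
    # lam = 1.0 recovers a fully cooperative objective (joint points);
    # lam = 0.0 recovers a strictly competitive, zero-sum one (point difference).
    # This linear interpolation is an illustrative assumption, not necessarily
    # the paper's exact parameterization.
    return lam * (own + other) + (1.0 - lam) * (own - other)


def play_episode(model):
    # Hypothetical stand-in for one self-play DoND negotiation: two copies of
    # `model` would exchange messages until reaching a deal (or walking away).
    # The outcome is faked here so the skeleton runs end to end.
    return "<dialogue transcript>", random.uniform(0, 10), random.uniform(0, 10)


def finetune(model, transcripts):
    # Hypothetical supervised fine-tuning (behavior cloning) on the retained
    # transcripts; a real implementation would update the model's weights.
    return model


def filtered_bc_round(model, lam, num_games=512, keep_frac=0.1):
    # One round of filtered behavior cloning: sample self-play games, keep
    # only the highest-reward transcripts, and clone that behavior.
    scored = []
    for _ in range(num_games):
        transcript, pts_a, pts_b = play_episode(model)
        scored.append((blended_reward(pts_a, pts_b, lam), transcript))
    scored.sort(key=lambda s: s[0], reverse=True)
    kept = [t for _, t in scored[: max(1, int(keep_frac * num_games))]]
    return finetune(model, kept)


model = object()  # placeholder for a language model
for _ in range(5):  # "multiple rounds" of filtered BC, per the abstract
    model = filtered_bc_round(model, lam=1.0)  # lam = 1.0: fully cooperative
```

Varying `lam` per experiment is one plausible way to realize the abstract's claim that the DoND objective can be made fully cooperative, strictly competitive, or anything in between, while the rest of the loop stays unchanged.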
