

Poster

Imitating Language via Scalable Inverse Reinforcement Learning

Markus Wulfmeier · Michael Bloesch · Nino Vieillard · Arun Ahuja · Jorg Bornschein · Sandy Huang · Artem Sokolov · Matt Barnes · Guillaume Desjardins · Alex Bewley · Sarah Bechtle · Jost Tobias Springenberg · Nikola Momchev · Olivier Bachem · Matthieu Geist · Martin Riedmiller

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

The majority of language model training builds on imitation learning. This covers pretraining and supervised fine-tuning, and it also shapes the starting conditions for reinforcement learning from human feedback (RLHF). The simplicity and scalability of maximum likelihood estimation (MLE) for next-token prediction have made it the predominant paradigm for pretraining and initial fine-tuning. However, the broader field of imitation learning has produced algorithms with the potential to better exploit the sequential structure underlying autoregressive generation. Here, we investigate the inverse reinforcement learning (IRL) perspective on imitation, which extracts rewards and directly optimizes sequences rather than individual token likelihoods, and we evaluate its benefits for fine-tuning large language models. We provide a new perspective, reformulating inverse soft Q-learning as a temporal-difference-regularized extension of MLE. This establishes a principled bridge between MLE and IRL and enables experiments that directly compare both task performance and diversity of generations in the supervised fine-tuning setting. We find clear benefits for IRL-based imitation, in particular for retaining diverse responses while maximizing task performance. This renders IRL a competitive alternative to MLE even without additional data generation, i.e., in standard fine-tuning on fixed datasets.
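As a rough illustration of the bridge described above (a schematic sketch based on standard soft Q-learning identities, not necessarily the paper's exact objective; the discount $\gamma$ and the particular grouping of terms are illustrative assumptions): if the model's logits are read as a soft Q-function, then

$\log \pi_\theta(a_t \mid s_t) = Q_\theta(s_t, a_t) - V_\theta(s_t), \qquad V_\theta(s_t) = \log \sum_{a} \exp Q_\theta(s_t, a),$

and a soft-Bellman term evaluated on expert tokens decomposes algebraically as

$Q_\theta(s_t, a_t) - \gamma V_\theta(s_{t+1}) = \underbrace{\log \pi_\theta(a_t \mid s_t)}_{\text{MLE term}} + \underbrace{V_\theta(s_t) - \gamma V_\theta(s_{t+1})}_{\text{temporal-difference term}}.$

Under this reading, an IRL-style sequence objective adds a temporal-difference coupling between the values of consecutive token states on top of the per-token log-likelihood used in standard MLE fine-tuning.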
