

Poster

Decoding-Time Language Model Alignment with Multiple Objectives

Ruizhe Shi · Yifang Chen · Yushi Hu · Alisa Liu · Hannaneh Hajishirzi · Noah Smith · Simon Du

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: Aligning language models (LMs) to human preferences has emerged as a critical pursuit, enabling these models to better serve diverse user needs. Existing methods primarily focus on optimizing LMs for a single reward function, limiting their adaptability to varied objectives. Here, we propose $\textbf{multi-objective decoding (MOD)}$, a decoding-time algorithm that combines a set of base models for any given weighting over the different objectives. We exploit a common form shared by a family of $f$-divergence regularized alignment approaches (such as PPO, DPO, and their variants) to identify a closed-form solution via the Legendre transform, and derive an efficient decoding strategy that greedily outputs the next token from the predicted probabilities of all base models. Theoretically, we show why existing approaches can be highly sub-optimal even in natural settings and obtain optimality guarantees for our method. Empirical results demonstrate the effectiveness of the algorithm. For example, compared to a parameter-merging baseline, MOD achieves a 12.8\% overall reward improvement when equally optimizing towards $3$ objectives. Moreover, we experiment with MOD on combining three well-tuned LLMs of different model sizes, each aimed at a different objective such as safety, coding, or general user preference. Unlike traditional methods that require careful curation of a mixture of datasets to achieve comprehensive improvement, we can quickly experiment with preference weightings using MOD to find the best combination of models. Our best combination reduces Toxigen to nearly 0 and achieves a 7.9--33.3\% improvement across the three other metrics ($\textit{i.e.}$, Codex@1, GSM-COT, BBH-COT).
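
To make the decoding strategy concrete, below is a minimal sketch of the per-token combination step, assuming the reverse-KL-regularized special case (with a shared regularization strength and weights summing to one), where the combined next-token distribution reduces to a re-normalized weighted geometric mixture of the base models' distributions. This is not the authors' released code; the helper name `mod_next_token`, the tensor shapes, and the toy usage at the end are illustrative assumptions.

```python
import torch

def mod_next_token(logprobs_per_model: list[torch.Tensor],
                   weights: list[float]) -> int:
    """Greedily pick the next token from weighted per-model log-probabilities.

    logprobs_per_model: one [vocab_size] tensor of next-token log-probs per
        base model (e.g., torch.log_softmax(logits[0, -1], dim=-1)).
    weights: one non-negative weight per objective/model, assumed to sum to 1.
    """
    # Weighted sum of log-probabilities = log of the weighted geometric mixture.
    combined = sum(w * lp for w, lp in zip(weights, logprobs_per_model))
    # Re-normalize so the combined scores form a proper log-distribution.
    combined = combined - torch.logsumexp(combined, dim=-1)
    # Greedy decoding: return the highest-scoring next token.
    return int(torch.argmax(combined))

# Illustrative usage with three random "models" over a toy vocabulary of size 8.
vocab_size = 8
fake_logprobs = [torch.log_softmax(torch.randn(vocab_size), dim=-1)
                 for _ in range(3)]
next_token = mod_next_token(fake_logprobs, weights=[0.5, 0.3, 0.2])
```

In practice, the per-model log-probabilities would come from a forward pass of each aligned base model on the current prefix, and the weights would encode the desired trade-off among objectives (e.g., safety vs. coding vs. general preference).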
