

Poster in Workshop: Pluralistic Alignment Workshop

Mechanism Design for LLM Fine-tuning with Multiple Reward Models

Haoran Sun · Yurong Chen · Siwei Wang · Wei Chen · Xiaotie Deng


Abstract:

Recent research on fine-tuning large language models (LLMs) by aggregating multiple preferences has attracted considerable attention. However, the existing literature predominantly focuses on the empirical performance of aggregation algorithms while neglecting agents' incentives to misreport their preferences. In this paper, we formalize this setting as a multi-parameter mechanism design problem, in which an LLM provider designs training and payment rules to achieve specific objectives while promoting truthful reporting of preferences. First, we establish the necessity of a payment scheme by demonstrating that, without payments, truth-telling is a strictly dominated strategy under a wide range of training rules. We then introduce the affine maximizer payment scheme for social-welfare-maximizing training rules, which ensures both dominant-strategy incentive compatibility (DSIC) and individual rationality (IR). Furthermore, we prove that under mild conditions, any other payment rule that implements these training rules in DSIC can be converted to the affine maximizer payment by adding a term independent of the agents' reports. We also show that this mechanism satisfies approximate DSIC when the input of the mechanism is a biased version of the reported preferences, demonstrating its robustness in real-world applications.
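
For context, a minimal sketch of the standard affine-maximizer (AMA) form from mechanism design, which the abstract's payment scheme builds on; the weights w_i, the boost term c(·), and how the outcome space maps onto training rules are assumptions here, not details taken from the abstract.

% Sketch of a generic affine maximizer, not necessarily the paper's exact rule.
% Outcome rule: pick the outcome x* maximizing an affine combination of reported values.
\[
  x^{*} \in \arg\max_{x} \; \sum_{i} w_i \, v_i(x) + c(x)
\]
% VCG-style payment for agent i, rescaled by 1/w_i: the externality agent i's report
% imposes on the weighted welfare of the other agents plus the boost term.
\[
  p_i \;=\; \frac{1}{w_i}\!\left[\,\max_{x}\Bigl(\sum_{j\neq i} w_j \, v_j(x) + c(x)\Bigr)
  \;-\;\Bigl(\sum_{j\neq i} w_j \, v_j(x^{*}) + c(x^{*})\Bigr)\right]
\]

Under this generic form, truthful reporting is a dominant strategy because each agent's payment depends on its own report only through the chosen outcome x*; the paper's contribution, per the abstract, is adapting and characterizing such payments for LLM fine-tuning with multiple reward models.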
