

Poster in Workshop: Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning

Personalized Language Modeling from Personalized Human Feedback

Xinyu Li · Ruiyang Zhou · Zachary Lipton · Liu Leqi


Abstract:

Personalized large language models (LLMs) are designed to tailor responses to individual user preferences. While Reinforcement Learning from Human Feedback (RLHF) is a commonly used framework for aligning LLMs with human preferences, vanilla RLHF assumes that all human preferences are drawn from the same distribution, preventing fine-tuned LLMs from generating personalized content when user preferences are diverse. In this work, we propose Personalized-RLHF (P-RLHF), an efficient framework that uses a lightweight user model to capture individual user preferences and jointly learns the user model and the personalized LLM from human feedback. P-RLHF has three key characteristics: it (1) enables an LLM to generate personalized content and to scale efficiently to a growing number of users; (2) handles both explicit user preferences described as textual input and implicit user preferences encoded in the feedback data; and (3) eliminates the need for users to fully articulate their preferences, which prompting-based personalization normally requires yet which are often impractical to obtain in real-world scenarios. Our empirical results show that personalized LLMs trained with P-RLHF generate content more closely aligned with individual user preferences, outperforming vanilla, non-personalized RLHF across different tasks.
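To make the idea of jointly learning a lightweight user model and a personalized LLM more concrete, the sketch below shows one hypothetical way such a setup could look in PyTorch: an embedding table over user IDs serves as the user model, its output is prepended as a soft prompt to a toy causal language model, and both are optimized with a DPO-style preference loss over per-user chosen/rejected response pairs. The toy base LM, all module and parameter names, and the specific loss form are illustrative assumptions for the implicit-feedback case only, not the authors' implementation.

# Minimal, hypothetical sketch (not the P-RLHF implementation): a per-user soft
# prompt conditions a toy causal LM, and both are trained jointly with a
# DPO-style preference loss on (chosen, rejected) pairs labeled by user ID.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyCausalLM(nn.Module):
    """Stand-in for a pretrained LLM: embeds tokens and predicts the next token."""

    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)  # causal by construction
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids: torch.Tensor, prefix_emb: torch.Tensor) -> torch.Tensor:
        # Prepend the user-specific soft-prompt embeddings before the token embeddings.
        x = torch.cat([prefix_emb, self.tok_emb(token_ids)], dim=1)
        h, _ = self.rnn(x)
        return self.lm_head(h)  # (batch, prefix_len + seq_len, vocab)


class PersonalizedPolicy(nn.Module):
    """Base LM conditioned on a learned per-user soft prompt (the lightweight user model)."""

    def __init__(self, vocab_size: int, d_model: int, num_users: int, prefix_len: int = 4):
        super().__init__()
        self.lm = ToyCausalLM(vocab_size, d_model)
        self.prefix_len = prefix_len
        self.d_model = d_model
        # One learned vector per (user, prefix position); new users only add rows here.
        self.user_prefix = nn.Embedding(num_users, prefix_len * d_model)

    def sequence_logprob(self, user_ids: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        prefix = self.user_prefix(user_ids).view(-1, self.prefix_len, self.d_model)
        logits = self.lm(token_ids, prefix)[:, self.prefix_len - 1 : -1, :]
        logp = F.log_softmax(logits, dim=-1)
        tok_logp = logp.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)
        return tok_logp.sum(dim=-1)  # total log-probability of each sequence


def personalized_dpo_loss(policy, reference, user_ids, chosen, rejected, beta=0.1):
    """DPO-style loss where policy and frozen reference are both conditioned on the user."""
    pi_c = policy.sequence_logprob(user_ids, chosen)
    pi_r = policy.sequence_logprob(user_ids, rejected)
    with torch.no_grad():
        ref_c = reference.sequence_logprob(user_ids, chosen)
        ref_r = reference.sequence_logprob(user_ids, rejected)
    margin = beta * ((pi_c - ref_c) - (pi_r - ref_r))
    return -F.logsigmoid(margin).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    vocab, d_model, num_users, seq = 100, 32, 8, 12
    policy = PersonalizedPolicy(vocab, d_model, num_users)
    reference = PersonalizedPolicy(vocab, d_model, num_users)
    reference.load_state_dict(policy.state_dict())  # frozen copy used as reference

    user_ids = torch.randint(0, num_users, (4,))
    chosen = torch.randint(0, vocab, (4, seq))
    rejected = torch.randint(0, vocab, (4, seq))
    loss = personalized_dpo_loss(policy, reference, user_ids, chosen, rejected)
    loss.backward()
    print("loss:", loss.item())

The sketch covers only implicit preferences encoded in feedback data; handling the explicit textual preferences mentioned in the abstract would additionally require conditioning the user model on the preference text.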
