Poster in Workshop: Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning
Towards Personalized Language Models via Inference-time Human Preference Optimization
Nikki Lijing Kuang · Wei Sun · Scott McFaddin · Yian Ma · Markus Ettl
The impressive generative capabilities of large language models (LLMs) have led to their widespread adoption across diverse applications. However, existing alignment methods, which primarily focus on optimizing for general human preferences such as safety, fairness, and trustworthiness, rely heavily on expensive fine-tuning processes. These approaches lack scalability and adaptability when addressing personal preferences, as they assume shared preferences across different users. In this paper, we introduce a novel approach to LLM alignment that enables personalized interaction with LLMs based on decode-time frameworks. Our approach enables dynamic adaptation to personal preferences during inference, providing a flexible and computationally efficient solution for personalization without the need for training-time interventions. We demonstrate the effectiveness of our method across different benchmark datasets and tasks, showing that it improves the ability of LLMs to meet diverse personal requirements compared to existing alignment methods.
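To illustrate the general idea of decode-time preference steering (not the authors' specific algorithm, which is not detailed in this abstract), the following minimal sketch shifts a base model's next-token log-probabilities by a per-user reward at each decoding step. The model choice, the `alpha` weighting, and the `toy_user_reward` scorer are all placeholder assumptions for illustration only.

```python
# Illustrative sketch of decode-time preference steering; NOT the paper's method.
# score(token) = log p_LM(token | context) + alpha * reward_user(token)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # placeholder base model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def toy_user_reward(token_ids: torch.Tensor) -> torch.Tensor:
    """Hypothetical per-user preference score for each candidate token.
    A real system would score candidates with a learned or user-specified
    preference model rather than a hand-picked token list."""
    rewards = torch.zeros(token_ids.shape[0])
    preferred = torch.tensor(tokenizer.encode(" concise", add_special_tokens=False))
    rewards[torch.isin(token_ids, preferred)] = 1.0
    return rewards

@torch.no_grad()
def preference_guided_decode(prompt: str, max_new_tokens: int = 30,
                             alpha: float = 2.0, top_k: int = 50) -> str:
    """Greedy decoding over top-k candidates, re-ranked by the user reward."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        logits = model(input_ids).logits[0, -1]            # next-token logits
        log_probs = torch.log_softmax(logits, dim=-1)
        top_vals, top_ids = torch.topk(log_probs, top_k)   # restrict to top-k candidates
        scores = top_vals + alpha * toy_user_reward(top_ids)
        next_id = top_ids[torch.argmax(scores)].view(1, 1)
        input_ids = torch.cat([input_ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)

print(preference_guided_decode("Summarize the meeting notes:"))
```

Because the adjustment happens only at inference, swapping in a different user's reward function changes the personalization without any retraining, which is the computational advantage the abstract emphasizes.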