Poster in Workshop: Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning

Controlling Multimodal LLMs via Reward-guided Decoding

Oscar Mañas · Pierluca D'Oro · Koustuv Sinha · Adriana Romero · Michal Drozdzal · Aishwarya Agrawal


Abstract:

As Multimodal Large Language Models (MLLMs) gain widespread applicability, it is becoming increasingly desirable to personalize them for diverse user needs. In this paper, we study the personalization of MLLMs by controlling their generation. To do so, we introduce the first method for reward-guided decoding of MLLMs. Our method learns a reward model for visual grounding and uses it to guide the MLLM's decoding process. Our approach enables on-the-fly personalization of an MLLM's inference in two ways: first, by giving control over the relative importance of reward and likelihood for candidate outputs during decoding, allowing a user to dynamically trade off object precision and recall in image captioning tasks; second, by giving control over the breadth of the search during decoding, allowing a user to trade off compute for output quality. We evaluate our method on standard object hallucination benchmarks, showing that it provides significant controllability over MLLM inference while matching or surpassing the performance of existing visual grounding methods.
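The abstract does not spell out the exact decoding rule, but the two knobs it describes (reward-vs-likelihood weighting and search breadth) can be illustrated with a minimal sketch. The code below assumes a simple scheme in which, at each step, the k most likely next tokens are re-ranked by a weighted sum of the model's log-likelihood and a reward score; the weight `lam`, the breadth `k`, and all function names (`llm_next_token_logprobs`, `reward_model`, `reward_guided_decode`) are illustrative stand-ins, not the authors' method or API.

```python
# Hypothetical sketch of reward-guided decoding with dummy stand-in models.
import math
import random


def llm_next_token_logprobs(prefix):
    """Stand-in for an MLLM's next-token distribution (random dummy scores)."""
    vocab = ["cat", "dog", "sitting", "on", "a", "mat", "<eos>"]
    logits = {tok: random.gauss(0.0, 1.0) for tok in vocab}
    log_norm = math.log(sum(math.exp(v) for v in logits.values()))
    return {tok: v - log_norm for tok, v in logits.items()}


def reward_model(prefix_tokens, token):
    """Stand-in for a visual-grounding reward model scoring a candidate token."""
    grounded = {"cat", "mat", "sitting"}  # pretend these are visible in the image
    return 1.0 if token in grounded else -1.0


def reward_guided_decode(prompt, max_len=10, k=5, lam=1.0):
    """Greedy reward-guided decoding.

    k   -- breadth of the candidate search (compute vs. quality knob)
    lam -- relative weight of reward vs. likelihood (precision vs. recall knob)
    """
    tokens = []
    for _ in range(max_len):
        logprobs = llm_next_token_logprobs(prompt + " " + " ".join(tokens))
        # Keep only the k most likely candidates, then re-rank by combined score.
        top_k = sorted(logprobs.items(), key=lambda kv: kv[1], reverse=True)[:k]
        best_tok, _ = max(
            top_k,
            key=lambda kv: kv[1] + lam * reward_model(tokens, kv[0]),
        )
        if best_tok == "<eos>":
            break
        tokens.append(best_tok)
    return " ".join(tokens)


print(reward_guided_decode("Describe the image:", k=5, lam=2.0))
```

Raising `lam` favors tokens the reward model considers grounded (higher object precision, possibly lower recall), while raising `k` widens the candidate pool at extra compute cost, mirroring the two controls described above.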