

Poster
in
Workshop: GenAI for Health: Potential, Trust and Policy Compliance

Prioritization Strategies for LLM-Designed Restless Bandit Rewards in Public Health

Shresth Verma · Niclas Boehmer · Lingkai Kong · Milind Tambe

Keywords: [ Public Health ] [ Mobile Health ] [ Reinforcement Learning ] [ Reward Generation ] [ Restless Bandits ]


Abstract:

LLMs are increasingly used to design reward functions based on human preferences in Reinforcement Learning (RL). We focus on LLM-designed rewards for Restless Multi-Armed Bandits, a framework for allocating limited resources among agents. In applications such as public health, this approach empowers grassroots health workers to tailor automated allocation decisions to community needs. In the presence of multiple agents, altering the reward function based on human preferences can impact subpopulations very differently, leading to complex tradeoffs and a multi-objective resource allocation problem. We present the first principled method, termed the Social Choice Language Model, for handling these tradeoffs in LLM-designed rewards for multiagent planners in general and restless bandits in particular. The novel component of our model is a transparent, configurable selection mechanism, called an adjudicator, which sits outside the LLM and controls complex tradeoffs via a user-selected social welfare function. Our experiments demonstrate that our model reliably selects more effective, aligned, and balanced reward functions than purely LLM-based approaches.
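The abstract does not include code, but the adjudicator idea can be illustrated concretely. Below is a minimal sketch, assuming each LLM-proposed candidate reward function has already been evaluated (e.g., by simulating the restless-bandit planner) into per-subpopulation utilities; the names `adjudicate`, `WELFARE_FUNCTIONS`, `reward_A`, and `reward_B`, and all numbers, are hypothetical illustrations, not the authors' implementation.

```python
from typing import Callable, Dict
import numpy as np

# A social welfare function maps per-subpopulation utilities to one score.
SocialWelfare = Callable[[np.ndarray], float]

# Three standard social welfare functions a user might select.
WELFARE_FUNCTIONS: Dict[str, SocialWelfare] = {
    "utilitarian": lambda u: float(np.sum(u)),   # total utility across groups
    "egalitarian": lambda u: float(np.min(u)),   # utility of the worst-off group
    # Nash welfare: product of utilities, computed in log form for stability.
    "nash": lambda u: float(np.sum(np.log(np.maximum(u, 1e-9)))),
}

def adjudicate(
    candidate_utilities: Dict[str, np.ndarray],
    welfare: str = "egalitarian",
) -> str:
    """Pick the candidate reward function whose induced per-subpopulation
    utilities maximize the user-selected social welfare function."""
    swf = WELFARE_FUNCTIONS[welfare]
    return max(candidate_utilities, key=lambda name: swf(candidate_utilities[name]))

if __name__ == "__main__":
    # Hypothetical utilities per subpopulation under two LLM-proposed rewards.
    candidates = {
        "reward_A": np.array([0.9, 0.2, 0.8]),  # high total, neglects group 2
        "reward_B": np.array([0.6, 0.5, 0.6]),  # more balanced across groups
    }
    print(adjudicate(candidates, welfare="utilitarian"))  # -> reward_A
    print(adjudicate(candidates, welfare="egalitarian"))  # -> reward_B
```

Placing this selection step outside the LLM, as the abstract describes, keeps the tradeoff policy transparent: changing the welfare function changes which reward is chosen in a way the user can inspect, rather than leaving the tradeoff implicit in the LLM's generation.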
