

Poster in Workshop: Pluralistic Alignment Workshop

Multi-objective Reinforcement Learning: A Tool for Pluralistic Alignment

Peter Vamplew · Conor Hayes · Cameron Foale · Richard Dazeley · Hadassah Harland


Abstract:

Reinforcement learning (RL) is a valuable tool for the creation of AI systems. However, it may be difficult to adequately align RL systems trained on scalar rewards when multiple conflicting values or stakeholders must be considered. Over the last decade, multi-objective reinforcement learning (MORL), which uses vector rewards, has emerged as an alternative to standard scalar RL. This paper provides an overview of the role MORL can play in creating pluralistically aligned AI.
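As a brief illustration of the scalar-versus-vector distinction the abstract draws, the sketch below (not from the paper; the objective names, weights, and numbers are hypothetical) shows how standard RL collapses all concerns into one scalar reward up front, while MORL keeps a vector of per-objective rewards and defers the trade-off, for example to stakeholder-specific linear scalarisations.

```python
# Minimal illustrative sketch of scalar vs. vector rewards in MORL.
# Objective names, weights, and reward values are hypothetical.
import numpy as np

# A vector reward: one component per objective or stakeholder value,
# e.g. [task progress, safety, fairness].
reward = np.array([1.0, -0.5, 0.2])

# Standard scalar RL fixes a single trade-off before learning,
# collapsing the vector into one number.
scalar_reward = float(reward.sum())

# MORL keeps the vector and applies a utility function afterwards;
# here, a linear scalarisation with per-stakeholder weights, so
# different stakeholders can induce different preferred policies.
stakeholder_weights = {
    "operator": np.array([0.7, 0.2, 0.1]),
    "regulator": np.array([0.1, 0.7, 0.2]),
}
for name, w in stakeholder_weights.items():
    print(name, float(w @ reward))
```

Linear scalarisation is only one choice of utility function; the point of the sketch is that the vector form preserves the information needed to serve multiple stakeholders, rather than baking one trade-off into the reward signal.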
