Poster in Workshop: Bayesian Decision-making and Uncertainty: from probabilistic and spatiotemporal modeling to sequential experiment design
Active Inverse Reinforcement Learning with Full Trajectories
Ondrej Bajgar · Dewi Gould · Jonathon Liu · Oliver Newcombe · Rohan Mitta · Jack Golden
Keywords: [ Inverse Reinforcement Learning ] [ Active Learning ]
As AI systems become increasingly autonomous, aligning their decision-making with human preferences is essential. In domains like autonomous driving or robotics, it is impossible to write down by hand a reward function representing these preferences. Inverse reinforcement learning (IRL) offers a promising approach to infer the unknown reward from demonstrations. However, obtaining human demonstrations can be costly. Active IRL addresses this challenge by strategically selecting the most informative scenarios for human demonstration, reducing the required human effort. Whereas prior work queried the human for an action at one state at a time, we motivate and analyse scenarios where we collect longer trajectories. We provide an information-theoretic acquisition function, propose an efficient approximation scheme, and illustrate its performance in a set of gridworld experiments, laying groundwork for future extensions to more general settings.
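The abstract does not spell out the acquisition function, so the following is only a minimal sketch of one standard way such an information-theoretic criterion can be approximated: a Monte-Carlo estimate of the expected information gain I(R; τ | s₀) about the reward from a full-trajectory demonstration started at a candidate state, under a Boltzmann-rational demonstrator model. The gridworld, the myopic softmax policy, and all names and parameters here are illustrative assumptions, not the authors' exact scheme.

```python
"""Hedged sketch: Monte-Carlo expected-information-gain acquisition for
active IRL with full trajectories. All names/parameters are assumptions."""
import numpy as np

rng = np.random.default_rng(0)
SIZE, H, BETA = 5, 8, 2.0                      # grid side, horizon, rationality
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(s, a):
    """Deterministic gridworld transition, clipped at the borders."""
    return (min(max(s[0] + a[0], 0), SIZE - 1),
            min(max(s[1] + a[1], 0), SIZE - 1))

def policy(s, reward):
    """Boltzmann-rational (here: myopic) policy, softmax over next-state rewards."""
    q = np.array([reward[step(s, a)] for a in ACTIONS])
    p = np.exp(BETA * (q - q.max()))
    return p / p.sum()

def rollout(s0, reward):
    """Sample one full demonstration trajectory from start state s0."""
    traj, s = [], s0
    for _ in range(H):
        a = rng.choice(len(ACTIONS), p=policy(s, reward))
        traj.append((s, a))
        s = step(s, ACTIONS[a])
    return traj

def log_lik(traj, reward):
    """Log-likelihood of a trajectory under a candidate reward function."""
    return sum(np.log(policy(s, reward)[a]) for s, a in traj)

def expected_info_gain(s0, reward_samples, n_rollouts=20):
    """MC estimate of I(R; tau | s0): average over posterior samples r_i and
    rollouts tau ~ p(tau | r_i) of  log p(tau | r_i) - log (1/N) sum_j p(tau | r_j)."""
    gains = []
    for r in reward_samples:
        for _ in range(n_rollouts):
            tau = rollout(s0, r)
            lls = np.array([log_lik(tau, rj) for rj in reward_samples])
            log_mix = np.logaddexp.reduce(lls) - np.log(len(reward_samples))
            gains.append(log_lik(tau, r) - log_mix)
    return np.mean(gains)

# Posterior over rewards, stood in for here by random samples.
posterior = [rng.normal(size=(SIZE, SIZE)) for _ in range(10)]
starts = [(i, j) for i in range(SIZE) for j in range(SIZE)]
best = max(starts, key=lambda s: expected_info_gain(s, posterior))
print("most informative start state:", best)
```

The estimator uses the identity I(R; τ) = E[log p(τ|r) − log p(τ)], with the marginal p(τ) approximated by the posterior-predictive mixture over reward samples; in practice one would replace the myopic policy with a planner (e.g. soft value iteration) and the random samples with a real reward posterior.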