Poster in Workshop: 4th Robot Learning Workshop: Self-Supervised and Lifelong Learning
Maximum Likelihood Constraint Inference on Continuous State Spaces
Kaylene Stocking · David McPherson · Robert Matthew · Claire Tomlin
When a robot observes another agent unexpectedly modifying its behavior, inferring the most likely cause is a valuable tool for maintaining safety and reacting appropriately. In this work, we present a novel constraint-inference method that works on continuous-state, possibly sub-optimal demonstrations. We first learn a representation of the continuous-state maximum entropy trajectory distribution using deep reinforcement learning. We then use Monte Carlo sampling from this distribution to estimate expected constraint violation probabilities and perform constraint inference. When the agent's dynamics and objective function are known in advance, this process can be performed offline, allowing for real-time constraint inference at the moment demonstrations are observed. We demonstrate our approach on two continuous systems, including a human driving a model car.
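The Monte Carlo step can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: it assumes a toy 1-D system with hypothetical threshold constraints, stands in for the learned maximum-entropy distribution with a simple random-walk sampler, and selects the constraint that is consistent with all demonstrations while being violated most often under the unconstrained trajectory distribution.

```python
import random

random.seed(0)

# Stand-in for sampling from the learned maximum-entropy trajectory
# distribution (here: a 1-D drifting random walk over 10 time steps).
def sample_trajectory():
    x, traj = 0.0, []
    for _ in range(10):
        x += random.gauss(0.5, 0.5)
        traj.append(x)
    return traj

samples = [sample_trajectory() for _ in range(5000)]

# Hypothetical candidate constraints: "state must stay below threshold t".
candidates = [2.0, 4.0, 6.0]

def violates(traj, t):
    return any(x > t for x in traj)

# Expected violation probability of each candidate constraint under the
# unconstrained (nominal) trajectory distribution, estimated by Monte Carlo.
p_violate = {t: sum(violates(tr, t) for tr in samples) / len(samples)
             for t in candidates}

# Synthetic demonstrations that respect a true constraint of x < 4.
demos = [tr for tr in samples if not violates(tr, 4.0)][:50]

# Maximum-likelihood choice: among constraints consistent with every
# demonstration, pick the one the nominal model violates most often --
# it best explains why the demonstrator avoided that region.
consistent = [t for t in candidates if all(not violates(d, t) for d in demos)]
best = max(consistent, key=lambda t: p_violate[t])
print(best)
```

Because the trajectory samples can be drawn ahead of time, the per-constraint violation probabilities here are computable offline, which is what enables the real-time inference described in the abstract.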