Avoiding Side Effects in Complex Environments
Alex Turner, Neale Ratzlaff, Prasad Tadepalli
Spotlight presentation: Orals & Spotlights Track 14: Reinforcement Learning
on 2020-12-08T20:10:00-08:00 - 2020-12-08T20:20:00-08:00
Poster Session 3
on 2020-12-08T21:00:00-08:00 - 2020-12-08T23:00:00-08:00
GatherTown: Reward/Reinforcement Learning ( Town A1 - Spot B3 )
Abstract: Reward function specification can be difficult. Rewarding the agent for making a widget may be easy, but penalizing the multitude of possible negative side effects is hard. In toy environments, Attainable Utility Preservation (AUP) avoided side effects by penalizing shifts in the ability to achieve randomly generated goals. We scale this approach to large, randomly generated environments based on Conway's Game of Life. By preserving optimal value for a single randomly generated reward function, AUP incurs modest overhead while leading the agent to complete the specified task and avoid many side effects. Videos and code are available at https://avoiding-side-effects.github.io/.
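To make the penalty described in the abstract concrete, here is a minimal Python sketch of an AUP-style shaped reward: the task reward is reduced in proportion to how much an action changes the agent's attainable value (Q-value) for an auxiliary reward function, relative to taking a no-op. The names q_aux, noop_action, and the coefficient lam are illustrative assumptions for this sketch, not the authors' exact implementation or hyperparameters.

def aup_reward(task_reward, q_aux, state, action, noop_action, lam=0.01):
    # Penalty: absolute change in attainable auxiliary value versus doing nothing.
    # q_aux(state, action) is a stand-in for a learned action-value estimate
    # of the auxiliary (randomly generated) reward function.
    penalty = abs(q_aux(state, action) - q_aux(state, noop_action))
    # Shaped reward: task reward minus the scaled side-effect penalty.
    return task_reward - lam * penalty

# Toy usage with a hand-written auxiliary Q-function standing in for a network.
q_aux = lambda s, a: float(a)
r = aup_reward(task_reward=1.0, q_aux=q_aux, state=None, action=3, noop_action=0)

Per the abstract, a single randomly generated auxiliary reward function suffices in the large Game-of-Life environments; in this sketch that corresponds to one learned q_aux.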