Poster in Workshop: Generalization in Planning (GenPlan '23)
Forecaster: Towards Temporally Abstract Tree-Search Planning from Pixels
Thomas Jiralerspong · Flemming Kondrup · Doina Precup · Khimya Khetarpal
Keywords: [ Tree-Search ] [ Planning ] [ Temporal Abstraction ] [ Reinforcement Learning ] [ World Model ]
The ability to plan at many different levels of abstraction enables agents to envision the long-term repercussions of their decisions and thus supports sample-efficient learning. This becomes particularly beneficial in complex environments with high-dimensional state spaces, such as pixels, where the goal is distant and the reward sparse. We introduce Forecaster, a deep hierarchical reinforcement learning approach which plans over high-level goals by leveraging a temporally abstract world model. Forecaster learns an abstract model of its environment by modelling transition dynamics at an abstract level and training a world model on such transitions. It then uses this world model to choose optimal high-level goals through a tree-search planning procedure. It additionally trains a low-level policy that learns to reach those goals. Our method captures not only the building of world models with longer horizons, but also planning with such models in downstream tasks. We empirically demonstrate Forecaster's potential both for single-task learning and for generalization to new tasks in the AntMaze domain.
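To make the high-level procedure concrete, the sketch below illustrates one plausible form of tree-search planning over subgoals with a learned abstract world model. The class names, method signatures, toy dynamics, and hyperparameters are illustrative assumptions for exposition only, not Forecaster's actual interfaces or implementation.

```python
# Hypothetical sketch: depth-limited tree search over high-level goals
# using an abstract (temporally extended) world model. All names and
# numbers here are assumptions, not Forecaster's API.
import random


class AbstractWorldModel:
    """Stand-in for a learned model of abstract transition dynamics.

    predict(state, goal) summarizes the effect of pursuing `goal` for one
    temporally extended step, returning (next_abstract_state, reward).
    """

    def predict(self, state, goal):
        # Toy dynamics: the agent moves halfway toward the goal and is
        # rewarded for shrinking the remaining distance.
        next_state = tuple((s + g) / 2 for s, g in zip(state, goal))
        reward = -sum(abs(g - s) for s, g in zip(next_state, goal))
        return next_state, reward


def plan_subgoal(model, state, candidate_goals, depth=3, branch=4, gamma=0.99):
    """Search over imagined subgoal sequences in the abstract model.

    Returns the first subgoal of the highest-value imagined sequence.
    """

    def search(s, d):
        if d == 0:
            return 0.0, None
        best_value, best_goal = float("-inf"), None
        # Expand a bounded random subset of candidate goals at each node.
        for g in random.sample(candidate_goals, min(branch, len(candidate_goals))):
            s_next, r = model.predict(s, g)    # imagined abstract transition
            future, _ = search(s_next, d - 1)  # recurse in imagination
            value = r + gamma * future
            if value > best_value:
                best_value, best_goal = value, g
        return best_value, best_goal

    return search(state, depth)[1]


if __name__ == "__main__":
    model = AbstractWorldModel()
    goals = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    # The chosen subgoal would then be handed to a low-level, goal-conditioned
    # policy that acts in the real environment until the subgoal is reached,
    # after which planning repeats from the new abstract state.
    print(plan_subgoal(model, state=(0.0, 0.0), candidate_goals=goals))
```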