Poster at Workshop: Generalization in Planning (GenPlan '23)
Integrating Planning and Deep Reinforcement Learning via Automatic Induction of Task Substructures
Jung-Chun Liu · Chi-Hsien Chang · Shao-Hua Sun · Tian-Li Yu
Keywords: [ Reinforcement Learning ] [ Classical Planning ]
Despite recent advancements, deep reinforcement learning (DRL) still struggles to learn sparse-reward, goal-directed tasks. Classical planning, on the other hand, excels at hierarchical tasks by employing symbolic knowledge, yet most of these methods rely on assumptions about pre-defined subtasks, making them inapplicable to problems without domain knowledge or models. To combine the best of both worlds, we propose a framework that integrates DRL with classical planning by automatically inducing task structures and substructures from a few demonstrations. Specifically, we use symbolic regression for substructure induction, adopting genetic programming in which the program model reflects prior domain knowledge of effect rules. We compare the proposed framework to state-of-the-art DRL algorithms, imitation learning methods, and an exploration approach across various domains. Experimental results show that our framework outperforms all of these baselines in both sample efficiency and task performance. Moreover, it generalizes well by effectively inducing new rules and composing task structures. Ablation studies justify the design of our induction module and the proposed genetic programming procedure.
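To make the induction idea concrete, below is a minimal, hypothetical sketch of effect-rule induction via genetic programming: programs are small expression trees over symbolic state variables, and fitness is prediction error against demonstrated effects. The toy domain (a `wood`/`axe` state and a "chop" effect), the operator set, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Illustrative sketch only: induce an effect rule (how one state variable
# changes) from (state, observed-effect) demonstration pairs via a crude GP.
OPS = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}
VARS = ["wood", "axe"]   # toy symbolic state variables (assumed domain)
CONSTS = [0, 1]

def random_tree(depth=2):
    """Grow a random expression tree: nested (op, left, right) tuples or leaves."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARS + CONSTS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, state):
    """Evaluate an expression tree on a symbolic state (dict of variables)."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, state), evaluate(right, state))
    return state[tree] if isinstance(tree, str) else tree

def fitness(tree, demos):
    """Mean absolute error between predicted and demonstrated effects (lower is better)."""
    return sum(abs(evaluate(tree, s) - d) for s, d in demos) / len(demos)

def mutate(tree):
    """Crude mutation: occasionally replace the whole tree with a fresh one."""
    return random_tree(2) if random.random() < 0.5 else tree

def induce(demos, pop_size=50, gens=30):
    """Evolve a population of candidate rules; keep the best half each generation."""
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: fitness(t, demos))
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=lambda t: fitness(t, demos))

# Demonstrations of a "chop wood" effect: wood increases by 1 only with an axe.
demos = [({"wood": w, "axe": a}, a) for w in range(3) for a in (0, 1)]
random.seed(0)
best = induce(demos)
print("best rule error:", fitness(best, demos))
```

In the paper's setting, the induced rules would serve as subtask effect models that a classical planner can compose into task structures; real systems would use richer operators, typed variables, and crossover in addition to mutation.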