Talk in Workshop: Deep Reinforcement Learning
Invited Talk: George Konidaris - Signal to Symbol (via Skills)
George Konidaris
I will discuss a route to general AI in which a general-purpose agent (which must have a complex, high-dimensional sensorimotor space) first autonomously learns abstract, task-specific representations - representations that reflect the complexity of the particular task the agent is currently solving, not that of the agent itself - and then applies an appropriate generic solution method to the resulting abstract task. I will argue that such a representation can be learned via a combination of state and action abstractions. I will present my group's recent progress on learning abstract actions in the form of high-level options, or skills. I will then consider the question of how to learn a compatible abstract state representation, taking a constructivist approach: the computation the representation is required to support - here, planning using a set of (learned or given) skills - is precisely defined, and its properties are then used to build a representation capable of supporting it by construction. The result is a formal link between state and action abstractions. I will present an example of a robot autonomously learning a (sound and complete) abstract representation directly from sensorimotor data, and then using it to plan.
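To make the planning computation mentioned above concrete, the following is a minimal sketch (not the speaker's actual system) of how an abstract representation can support planning with skills: each skill is described by an abstract precondition (what must hold for it to run) and an abstract effect (what holds after it finishes), and a plan is feasible if preconditions are satisfied when each skill is reached. The `Skill` class, the `plan_is_feasible` function, and the example symbols are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Skill:
    # Illustrative assumption: a skill summarized by symbolic precondition/effect sets.
    name: str
    precondition: frozenset  # abstract symbols that must hold to start the skill
    effect: frozenset        # abstract symbols that hold after the skill terminates


def plan_is_feasible(start_symbols: frozenset, plan: list) -> bool:
    """Return True if each skill's precondition is satisfied when it is reached.

    This is the computation the abstract state representation must support:
    if the learned symbols are sound and complete for it, feasibility judged
    here matches feasibility in the underlying sensorimotor space.
    """
    state = start_symbols
    for skill in plan:
        if not skill.precondition <= state:
            return False
        # Simplified effect model: the skill's effect replaces the abstract state.
        # Richer models would track which symbols persist across execution.
        state = skill.effect
    return True


if __name__ == "__main__":
    walk_to_door = Skill("walk_to_door",
                         precondition=frozenset({"in_room"}),
                         effect=frozenset({"at_door", "door_closed"}))
    open_door = Skill("open_door",
                      precondition=frozenset({"at_door", "door_closed"}),
                      effect=frozenset({"at_door", "door_open"}))
    go_through = Skill("go_through",
                       precondition=frozenset({"at_door", "door_open"}),
                       effect=frozenset({"in_hallway"}))

    print(plan_is_feasible(frozenset({"in_room"}),
                           [walk_to_door, open_door, go_through]))  # True
    print(plan_is_feasible(frozenset({"in_room"}),
                           [walk_to_door, go_through]))             # False
```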