Poster in Workshop: Deep Reinforcement Learning
A Framework for Efficient Robotic Manipulation
Albert Zhan · Ruihan Zhao · Lerrel Pinto · Pieter Abbeel · Misha Laskin
Recent advances in unsupervised representation learning have significantly improved the sample efficiency of training reinforcement learning policies in simulated environments. However, similar gains have not yet been observed for real-robot learning. In this work, we focus on enabling data-efficient real-robot learning from pixels. We present a Framework for Efficient Robotic Manipulation (FERM), a method that utilizes data augmentation and unsupervised learning to achieve sample-efficient training of real-robot arm policies from sparse rewards. While contrastive pre-training, data augmentation, and demonstrations are individually insufficient for efficient learning, our main contribution is showing that the combination of these disparate techniques results in a simple yet data-efficient method. We show that, given only 10 demonstrations, a single robotic arm can learn sparse-reward manipulation policies from pixels, such as reaching, picking, moving, pulling a large object, flipping a switch, and opening a drawer, in a mean of just 30 minutes of real-world training time.
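To make the contrastive pre-training ingredient concrete, the sketch below shows one training step of a CURL-style contrastive objective over demonstration frames: two independently random-cropped views of the same images are encoded, and an InfoNCE loss pulls matching views together against in-batch negatives. This is a minimal illustration, not the authors' released implementation; the encoder architecture, the 84x84 crop size, and the bilinear similarity matrix `W` are assumptions in the style of CURL-like methods rather than details taken from this abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_crop(imgs, out_size=84):
    """Randomly crop a batch of images (B, C, H, W) to out_size x out_size."""
    b, c, h, w = imgs.shape
    crops = torch.empty(b, c, out_size, out_size, device=imgs.device)
    for i in range(b):
        top = torch.randint(0, h - out_size + 1, (1,)).item()
        left = torch.randint(0, w - out_size + 1, (1,)).item()
        crops[i] = imgs[i, :, top:top + out_size, left:left + out_size]
    return crops

class Encoder(nn.Module):
    """Small convolutional pixel encoder (a stand-in architecture)."""
    def __init__(self, feature_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            n = self.conv(torch.zeros(1, 3, 84, 84)).shape[1]
        self.fc = nn.Linear(n, feature_dim)

    def forward(self, x):
        return self.fc(self.conv(x))

def info_nce_loss(query, key, W):
    """InfoNCE: match each query to its positive key against in-batch negatives."""
    logits = query @ W @ key.t()                           # (B, B) similarities
    logits = logits - logits.max(1, keepdim=True).values   # numerical stability
    labels = torch.arange(logits.shape[0])                 # positives on the diagonal
    return F.cross_entropy(logits, labels)

# One pre-training step over a batch of stored demonstration frames.
encoder = Encoder()
W = nn.Parameter(torch.rand(50, 50))
optimizer = torch.optim.Adam(list(encoder.parameters()) + [W], lr=1e-3)

frames = torch.rand(32, 3, 100, 100)       # stand-in for demonstration observations
q = encoder(random_crop(frames))           # first augmented view
k = encoder(random_crop(frames)).detach()  # second view (a momentum encoder in practice)
loss = info_nce_loss(q, k, W)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a pipeline of this kind, the pre-trained encoder would then initialize the RL agent's encoder, with the same random-crop augmentation applied to observations during subsequent policy updates from sparse rewards.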