Poster in Workshop: 6th Robot Learning Workshop: Pretraining, Fine-Tuning, and Generalization with Large Scale Models
Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning
Jingyun Yang · Max Sobol Mark · Brandon Vu · Archit Sharma · Jeannette Bohg · Chelsea Finn
Keywords: [ Reset-Free RL ] [ Autonomous Fine-Tuning ] [ Reinforcement Learning ]
The pre-train-and-fine-tune approach in machine learning has been highly successful across various domains, enabling rapid task learning by leveraging existing data and pre-trained models from the internet. We seek to bring this approach to robotic reinforcement learning, allowing robots to learn new tasks with minimal human involvement by building on online resources. We introduce RoboFuME, a reset-free fine-tuning system that pre-trains a versatile manipulation policy from diverse prior experience datasets and then autonomously learns a target task with minimal human input. In real-world robot manipulation tasks, our method can incorporate data from an external robot dataset and improve performance on a target task with as little as 3 hours of autonomous real-world experience. We also evaluate our method against various baselines in simulation experiments. Website: https://tinyurl.com/robofume