

Spotlight Poster

Humanoid Locomotion as Next Token Prediction

Ilija Radosavovic · Jathushan Rajasegaran · Baifeng Shi · Bike Zhang · Sarthak Kamat · Koushil Sreenath · Trevor Darrell · Jitendra Malik

East Exhibit Hall A-C #4003
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

We cast real-world humanoid control as a next token prediction problem, akin to predicting the next word in language. Our model is a causal transformer trained via autoregressive prediction of sensorimotor sequences. To account for the multi-modal nature of the data, we perform prediction in a modality-aligned way, and for each input token predict the next token from the same modality. This general formulation enables us to leverage data with missing modalities, such as videos without actions. We train our model on a dataset of sequences from prior neural network policies, model-based controllers, motion capture, and YouTube videos of humans. We show that our model enables a real humanoid robot to walk in San Francisco zero-shot. Our model can transfer to the real world even when trained on only 27 hours of walking data, and can generalize to commands not seen during training. These findings suggest a promising path toward learning challenging real-world control tasks by generative modeling of sensorimotor sequences.
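To make the modality-aligned formulation concrete, below is a minimal PyTorch sketch of the idea described in the abstract: observation and action tokens are interleaved into one causal sequence, each token predicts the next token of its own modality, and sequences with missing actions (e.g., derived from video) use a learned mask token and contribute no action loss. All names (ModalityAlignedTransformer, obs_dim, act_dim), dimensions, and the specific tokenizers and loss are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class ModalityAlignedTransformer(nn.Module):
    """Causal transformer over interleaved observation/action tokens.

    Illustrative sketch only: dimensions, tokenizers, and the masking
    scheme are assumptions, not the authors' actual implementation.
    """

    def __init__(self, obs_dim=64, act_dim=19, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        # Per-modality linear tokenizers for continuous inputs.
        self.obs_in = nn.Linear(obs_dim, d_model)
        self.act_in = nn.Linear(act_dim, d_model)
        # Learned embedding standing in for missing action tokens
        # (e.g., trajectories reconstructed from videos without actions).
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos_emb = nn.Parameter(torch.zeros(1, 1024, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        # Separate heads so each token predicts the next token of its
        # own modality ("modality-aligned" prediction).
        self.obs_head = nn.Linear(d_model, obs_dim)
        self.act_head = nn.Linear(d_model, act_dim)

    def forward(self, obs, act, act_valid):
        # obs: (B, T, obs_dim); act: (B, T, act_dim)
        # act_valid: (B, T) bool, False where actions are unobserved.
        B, T, _ = obs.shape
        o = self.obs_in(obs)
        a = torch.where(act_valid[..., None], self.act_in(act),
                        self.mask_token.expand(B, T, -1))
        # Interleave as (o_1, a_1, o_2, a_2, ...) -> length 2T.
        x = torch.stack([o, a], dim=2).flatten(1, 2) + self.pos_emb[:, :2 * T]
        causal = nn.Transformer.generate_square_subsequent_mask(2 * T).to(x.device)
        h = self.backbone(x, mask=causal).reshape(B, T, 2, -1)
        h_obs, h_act = h[:, :, 0], h[:, :, 1]
        # Aligned targets: the hidden state at o_t predicts o_{t+1};
        # the hidden state at a_t predicts a_{t+1}.
        loss_obs = (self.obs_head(h_obs[:, :-1]) - obs[:, 1:]).pow(2).mean()
        act_err = (self.act_head(h_act[:, :-1]) - act[:, 1:]).pow(2).mean(-1)
        # Zero out the action loss wherever ground truth is missing.
        valid = act_valid[:, 1:].float()
        loss_act = (act_err * valid).sum() / valid.sum().clamp(min=1)
        return loss_obs + loss_act


# Tiny smoke test with mixed data: one sequence has no actions at all.
model = ModalityAlignedTransformer()
obs = torch.randn(2, 16, 64)
act = torch.randn(2, 16, 19)
act_valid = torch.tensor([[True] * 16, [False] * 16])
print(model(obs, act, act_valid))  # scalar training loss
```

The masking design is what lets a single objective consume heterogeneous sources: fully annotated sequences train both heads, while action-free sequences still supervise the observation head and shape the shared representation.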
