

Poster

ClevrSkills: Compositional Language And Visual Understanding in Robotics

Sanjay Haresh · Daniel Dijkman · Apratim Bhattacharyya · Roland Memisevic

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Robotics tasks are highly compositional by nature. For example, to perform a high-level task like cleaning the table, a robot must employ low-level capabilities such as moving its end effectors to the objects on the table, picking them up, and moving them off the table one by one, all while re-evaluating the continually changing scene. Given that large vision language models (VLMs) have shown progress on many tasks that require high-level, human-like reasoning, we ask the question: if the models are taught the requisite low-level capabilities, can they compose them in novel ways to achieve interesting high-level tasks like cleaning the table without having to be explicitly taught to do so? To this end, we present ClevrSkills, a benchmark suite for compositional understanding in robotics. ClevrSkills consists of an environment suite built on top of the ManiSkill2 simulator and an accompanying dataset. The dataset contains trajectories generated on a range of robotics tasks with language and visual annotations, as well as multi-modal prompts as task specifications. The suite includes a curriculum of tasks with three levels of compositional understanding, starting with simple tasks requiring basic motor skills. We benchmark several VLM baselines on ClevrSkills and show that even after being pre-trained on a large number of tasks, these models fail at compositional reasoning in robotics tasks.
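As a rough illustration of how an environment suite built on ManiSkill2 is typically used, here is a minimal sketch assuming a recent ManiSkill2 release that registers its tasks through the Gymnasium API; the task id shown is a standard ManiSkill2 task used as a stand-in, and the actual ClevrSkills task ids and options are not taken from the paper.

    import gymnasium as gym
    import mani_skill2.envs  # registers ManiSkill2 environments on import

    # Hypothetical configuration; the real ClevrSkills ids may differ.
    env = gym.make(
        "PickCube-v0",                    # stand-in task id
        obs_mode="rgbd",                  # image-based observations for VLM-style agents
        control_mode="pd_ee_delta_pose",  # end-effector delta-pose control
    )

    obs, info = env.reset(seed=0)
    done = False
    while not done:
        action = env.action_space.sample()  # random-policy placeholder
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
    env.close()

In a benchmark like this, the random policy above would be replaced by a VLM-based agent that consumes the multi-modal prompt and image observations and emits low-level actions.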
