

Poster

Long-Horizon Planning for Multi-Agent Robots in Partially Observable Environments

Sid Nayak · Adelmo Morrison Orozco · Marina Have · Jackson Zhang · Vittal Thirumalai · Darren Chen · Aditya Kapoor · Eric Robinson · Karthik Gopalakrishnan · James Harrison · Anuj Mahajan · Brian Ichter · Hamsa Balakrishnan

Poster Room - TBD
[ Project Page ]
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The ability of Language Models (LMs) to understand natural language makes them a powerful tool for parsing human instructions into task plans for autonomous robots. Unlike traditional planning methods that rely on domain-specific knowledge and handcrafted rules, LMs generalize from diverse data and adapt to various tasks with minimal tuning, acting as a compressed knowledge base. However, LMs in their standard form face challenges with long-horizon tasks, particularly in partially observable multi-agent settings. We propose an LM-based Long-Horizon Planner for Multi-Agent Robotics (LLaMAR), a cognitive architecture for planning that achieves state-of-the-art results in long-horizon tasks within partially observable environments. LLaMAR employs a plan-act-correct-verify framework, allowing self-correction during action execution without relying on oracles or simulators. Additionally, we present MAP-THOR, a comprehensive test suite encompassing household tasks of varying complexity within the AI2-THOR environment. Experiments show that LLaMAR achieves a 30% higher success rate compared to other state-of-the-art LM-based multi-agent planners.
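To make the plan-act-correct-verify idea concrete, the sketch below shows one way such a loop could be structured for multiple agents under partial observability. It is a minimal illustration only, not the authors' implementation: every function name (propose_subtasks, choose_actions, execute, verify) is a hypothetical stub standing in for an LM call or an environment interface.

# Minimal sketch of a plan-act-correct-verify loop for LM-based multi-agent
# planning, in the spirit of the abstract above. All helpers are hypothetical
# stubs, not LLaMAR's actual implementation.

def propose_subtasks(instruction, memory):
    """Plan: ask an LM to break the instruction into open subtasks."""
    return [instruction]  # stub: treat the whole instruction as one subtask

def choose_actions(subtasks, observations, memory):
    """Act: ask an LM to pick one action per agent from partial observations."""
    return {agent: "explore" for agent in observations}  # stub policy

def execute(actions):
    """Send actions to the environment; return per-agent feedback."""
    return {agent: {"success": True, "obs": "nothing new"} for agent in actions}

def verify(subtasks, feedback, memory):
    """Verify: mark subtasks complete only from observed feedback, no oracle."""
    if all(f["success"] for f in feedback.values()):
        return []  # stub: everything succeeded, so no subtasks remain open
    return subtasks

def plan_act_correct_verify(instruction, agents, max_steps=20):
    memory = []
    subtasks = propose_subtasks(instruction, memory)
    observations = {agent: None for agent in agents}
    for _ in range(max_steps):
        if not subtasks:
            return True  # all subtasks verified complete
        actions = choose_actions(subtasks, observations, memory)
        feedback = execute(actions)
        # Correct: record outcomes so the next planning call can adjust
        # to failures instead of replaying the same plan.
        memory.append({"actions": actions, "feedback": feedback})
        observations = {agent: f["obs"] for agent, f in feedback.items()}
        subtasks = verify(subtasks, feedback, memory)
    return False

if __name__ == "__main__":
    done = plan_act_correct_verify("put the apple in the fridge", ["robot_1", "robot_2"])
    print("task completed:", done)

The key design point reflected here is that verification relies only on the agents' own observations and execution feedback, consistent with the abstract's claim of self-correction without oracles or simulators.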
