

Poster

ALPINE: Unveiling The Planning Capability of Autoregressive Learning in Language Models

Siwei Wang · Yifei Shen · Shi Feng · Haoran Sun · Shang-Hua Teng · Wei Chen

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Planning constitutes a crucial element of both human intelligence and contemporary large language models (LLMs). In this paper, we initiate a theoretical investigation into how planning capabilities develop in Transformer-based language models through their next-word prediction mechanisms, aiming to identify potential limitations in their planning abilities. For theoretical simplicity, we abstract real-world planning as a network path-finding task: generating a valid path from a given source node to a given target node. In terms of expressiveness, we show that the Transformer is capable of executing path-finding by embedding the adjacency and reachability matrices within its weights. Our analysis of the learning dynamics reveals that the Transformer can learn the adjacency matrix but only a limited form of the reachability matrix. This incompleteness implies that autoregressive language models cannot derive reachability relationships through transitivity, and thus fail whenever generating a path requires concatenating paths observed during training. These theoretical insights are then validated through experiments on a synthetic path-finding dataset and a real-world planning dataset named Blocksworld, which demonstrate that the Transformer indeed executes path-finding through the adjacency matrix and the incomplete reachability matrix encoded in its weights.
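The transitivity argument in the abstract can be illustrated with a small sketch. The graph, node names, and training paths below are hypothetical examples invented for illustration, not data from the paper: a "learned" reachability relation built only from node pairs co-occurring in observed paths misses a target that is truly reachable but requires concatenating two training paths.

```python
# Hypothetical sketch of the abstract's transitivity argument. The toy graph
# and training paths below are assumptions for illustration only.
from itertools import product

# Toy chain graph: a -> b -> c -> d.
nodes = {"a", "b", "c", "d"}
edges = {("a", "b"), ("b", "c"), ("c", "d")}

# Training data contains paths a->b->c and b->c->d, but never a path
# running all the way from a to d.
training_paths = [["a", "b", "c"], ["b", "c", "d"]]

# "Learned" reachability: pairs (u, v) where v follows u in some observed
# path, mirroring the limited reachability matrix described in the abstract.
observed = {
    (p[i], p[j])
    for p in training_paths
    for i, j in product(range(len(p)), repeat=2)
    if i < j
}

# True reachability: transitive closure of the edge set
# (Floyd-Warshall-style triple loop over intermediate nodes k).
reach = set(edges)
for k, i, j in product(sorted(nodes), repeat=3):
    if (i, k) in reach and (k, j) in reach:
        reach.add((i, j))

# (a, d) is truly reachable via a->b->c->d, but reaching it requires
# concatenating the two training paths, so the learned relation misses it.
print(("a", "d") in reach)     # True
print(("a", "d") in observed)  # False
```

Under these assumptions, a model whose reachability knowledge comes only from co-occurrence in training paths would decline to plan from a to d, even though the route exists, matching the failure mode the abstract attributes to path concatenation.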
