Poster in Workshop: Compositional Learning: Perspectives, Methods, and Paths Forward

Transformer-based Imagination with Slot Attention

Yosuke Nishimoto · Takashi Matsubara

Keywords: [ world model ] [ reinforcement learning ] [ object-centric learning ]


Abstract:

World models have been proposed to improve the learning efficiency of deep reinforcement learning (RL) agents. However, it remains challenging for world models to faithfully replicate environments that are high-dimensional and non-stationary and that comprise multiple objects and their interactions. We propose Transformer-based Imagination with Slot Attention (TISA), an RL agent that integrates a Transformer-based object-centric world model, policy function, and value function. The world model in TISA uses a Transformer-based architecture to handle each object's state, actions, and rewards (or costs) separately, allowing it to manage high-dimensional observations and avoid a combinatorial explosion of the dynamics. In addition, the Transformer-based policy and value functions can make decisions by considering the dynamics of individual objects and their interactions. On the Safety-Gym benchmark, TISA outperforms a previous Transformer-based world model method.
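To make the object-centric idea concrete, the sketch below shows a minimal Slot Attention module (following Locatello et al., 2020) paired with a hypothetical per-object Transformer dynamics model: slots extracted from image features serve as object tokens, and a Transformer attends over the slot tokens plus an action token to predict each object's next latent state. This is an illustrative assumption about the general architecture class, not the authors' implementation; the module names, dimensions, and the `SlotTransformerDynamics` class are invented for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlotAttention(nn.Module):
    """Minimal Slot Attention: competitive attention groups input features into slots."""

    def __init__(self, num_slots: int, dim: int, iters: int = 3):
        super().__init__()
        self.num_slots, self.iters, self.scale = num_slots, iters, dim ** -0.5
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_logsigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm_inputs, self.norm_slots, self.norm_mlp = (
            nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim))

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: (batch, num_inputs, dim), e.g. a flattened CNN feature map
        b, _, d = inputs.shape
        inputs = self.norm_inputs(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)
        slots = self.slots_mu + self.slots_logsigma.exp() * torch.randn(
            b, self.num_slots, d, device=inputs.device)
        for _ in range(self.iters):
            slots_prev = slots
            q = self.to_q(self.norm_slots(slots))
            # softmax over slots so slots compete for each input location
            attn = F.softmax(torch.einsum('bid,bjd->bij', q, k) * self.scale, dim=1)
            attn = attn / attn.sum(dim=-1, keepdim=True)  # weighted mean over inputs
            updates = torch.einsum('bij,bjd->bid', attn, v)
            slots = self.gru(updates.reshape(-1, d), slots_prev.reshape(-1, d)).reshape(b, -1, d)
            slots = slots + self.mlp(self.norm_mlp(slots))
        return slots  # (batch, num_slots, dim): one latent per object


class SlotTransformerDynamics(nn.Module):
    """Hypothetical per-object dynamics: a Transformer attends over slot tokens
    plus an action token and predicts each slot's next-step latent."""

    def __init__(self, dim: int, action_dim: int, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        self.action_proj = nn.Linear(action_dim, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(dim, dim)

    def forward(self, slots: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # slots: (batch, num_slots, dim), action: (batch, action_dim)
        action_token = self.action_proj(action).unsqueeze(1)   # (batch, 1, dim)
        tokens = torch.cat([slots, action_token], dim=1)       # object tokens + action token
        out = self.encoder(tokens)[:, :slots.size(1)]          # keep only the slot tokens
        return self.head(out)                                  # predicted next-step slots


# Usage sketch: group features into object slots, then roll the latents forward.
features = torch.randn(2, 64, 128)                  # (batch, locations, dim) from an encoder
slots = SlotAttention(num_slots=5, dim=128)(features)
next_slots = SlotTransformerDynamics(dim=128, action_dim=8)(slots, torch.randn(2, 8))
```

Because every object is a separate token, the dynamics model scales with the number of slots rather than with the joint state of all objects, which is one way to read the paper's claim about avoiding a combinatorial explosion; a policy or value function built on the same tokens can likewise attend over individual objects and their interactions.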
