

Poster in Workshop: Compositional Learning: Perspectives, Methods, and Paths Forward

GSR-Bench: A Benchmark for Grounded Spatial Reasoning Evaluation via Multimodal LLMs

Navid Rajabi · Jana Kosecka

Keywords: [ Vision Language Models ] [ Spatial Reasoning ] [ Multimodal Large Language Models ]


Abstract:

The ability to understand and reason about spatial relationships between objects in images is an important component of visual reasoning. This skill rests on recognizing and localizing the objects of interest and determining their spatial relations. Early vision and language models (VLMs) have been shown to struggle with recognizing spatial relations. We extend the previously released What'sUp dataset and propose a novel, comprehensive evaluation of spatial relationship understanding that highlights the strengths and weaknesses of 9 Multimodal LLMs (MLLMs), in comparison with the 18 VLMs tested in the What'sUp dataset. Our experiments cover three classes of MLLMs that vary in parameter size (ranging from 7B to 110B), training/instruction-tuning method, and visual resolution, to benchmark their performance and examine scaling behavior on this task.
