

Poster in Workshop: MATH-AI: The 4th Workshop on Mathematical Reasoning and AI

TurtleBench: A Visual Programming Benchmark in Turtle Geometry

Sina Rismanchian · Yasaman Razeghi · Sameer Singh · Shayan Doroudi

Keywords: [ Large Multimodal Models ] [ Geometry ] [ Mathematical Reasoning ]


Abstract:

While formal geometric reasoning may be difficult for humans without extensive training, humans seem to have the ability to intuitively reason about geometric patterns in images and scenes from a young age. In contrast, developing large multimodal models (LMMs) capable of similar feats represents a frontier in AI research. We introduce TurtleBench, a benchmark designed to evaluate LMMs' capacity to interpret geometric patterns—given visual examples, textual instructions, or both—and generate precise code outputs. Inspired by turtle geometry, a notion used to teach children foundational coding and geometric concepts, TurtleBench features tasks with patterned shapes that have underlying algorithmic logic. Unlike object detection tasks that typically do not involve understanding underlying patterns, this benchmark combines geometrical reasoning with image understanding. Our evaluation reveals that leading LMMs struggle significantly with these tasks, with GPT-4V achieving only 19% accuracy on the simplest tasks. TurtleBench highlights the gap between human and AI performance in intuitive and visual geometrical understanding, setting the stage for future research in this area.
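To illustrate the kind of task the abstract describes, below is a minimal sketch of a turtle-geometry program of the sort a model would be asked to produce: a patterned shape with simple underlying algorithmic logic. It uses Python's standard turtle module; the specific figure (eight squares rotated about a shared corner) and all names are illustrative assumptions, not drawn from the benchmark itself.

# Illustrative only: a patterned shape of the kind TurtleBench tasks involve,
# drawn with Python's standard turtle module. The figure and parameters are
# hypothetical examples, not taken from the benchmark.
import turtle


def square(t: turtle.Turtle, side: float) -> None:
    """Draw one square with the given side length."""
    for _ in range(4):
        t.forward(side)
        t.left(90)


def rotated_squares(t: turtle.Turtle, side: float, count: int) -> None:
    """Draw `count` squares sharing a corner, evenly rotated over 360 degrees."""
    for _ in range(count):
        square(t, side)
        t.left(360 / count)  # rotate before drawing the next square


if __name__ == "__main__":
    t = turtle.Turtle()
    t.speed(0)                       # draw as fast as possible
    rotated_squares(t, side=100, count=8)
    turtle.done()                    # keep the window open until closed

A model solving such a task must recover this kind of loop-and-rotation structure from a rendered image, a textual description, or both, which is what distinguishes it from plain object detection.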
