
Poster

Mr.Bean: A Comprehensive Meta-Reasoning Benchmark for Analyzing Large Language Models

Zhongshen Zeng · Yinhong Liu · Yingjia Wan · Jingyao Li · Pengguang Chen · Jianbo Dai · Yuxuan Yao · Rongwu Xu · Zehan Qi · Wanru Zhao · Linling Shen · Jianqiao Lu · Haochen Tan · Yukang Chen · Hao Zhang · Zhan Shi · Bailin Wang · Zhijiang Guo · Jiaya Jia

[ Project Page ]
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Large language models (LLMs) have shown increasing capability in problem-solving and decision-making, largely owing to step-by-step chain-of-thought reasoning. However, evaluating the reasoning capability of LLMs has become increasingly challenging: existing outcome-based benchmarks are beginning to saturate and are no longer sufficient to monitor progress. To this end, we present Mr.Bean, a process-based benchmark that demands meta-reasoning skill: LLMs are asked to locate and analyse potential errors in automatically generated reasoning steps. Mr.Bean is a comprehensive benchmark comprising 6,006 questions collected from human experts, covering subjects such as physics, chemistry, logic, and coding. Through our designed metrics for assessing meta-reasoning on this benchmark, we identify interesting limitations and weaknesses of current LLMs, both open-source and closed-source. For example, open-source models appear comparable to GPT-4 on outcome-based benchmarks but lag far behind on our benchmark, revealing the underlying reasoning-capability gap between them. Our anonymized dataset and code are submitted with the paper and will be made publicly available.
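To make the task format concrete, the sketch below shows how an error-localization metric for such a benchmark might be computed. The field names (solution_steps, first_error_step) and the exact scoring rule are illustrative assumptions, not the benchmark's actual schema or metrics.

    # Hypothetical sketch of a meta-reasoning evaluation loop for a
    # benchmark like Mr.Bean. Field names and the scoring rule are
    # illustrative assumptions, not the paper's actual schema or metrics.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Instance:
        question: str
        solution_steps: list[str]          # automatically generated reasoning steps
        first_error_step: Optional[int]    # index of first erroneous step; None if all steps are correct

    def localization_accuracy(
        instances: list[Instance],
        predict: Callable[[str, list[str]], Optional[int]],
    ) -> float:
        """Fraction of instances where the model pinpoints the first
        erroneous step (or correctly declares the solution error-free)."""
        correct = sum(
            predict(ex.question, ex.solution_steps) == ex.first_error_step
            for ex in instances
        )
        return correct / len(instances)

Under this framing, a model is credited only when it identifies exactly where a reasoning chain first goes wrong, which is a stricter test than checking the final answer alone.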
