Poster
in
Workshop: MATH-AI: The 4th Workshop on Mathematical Reasoning and AI
AI-Assisted Generation of Difficult Math Questions
Vedant Shah · Dingli Yu · Kaifeng Lyu · Simon Park · Jiatong Yu · Yinghui He · Nan Rosemary Ke · Michael Mozer · Yoshua Bengio · Sanjeev Arora · Anirudh Goyal
Keywords: [ Skill Composition ] [ Mathematical Reasoning ] [ Synthetic Data ]
Abstract:
Current LLM training positions mathematical reasoning as a core capability. With publicly available sources fully tapped, there is an unmet demand for diverse and challenging mathematics questions. Relying solely on human experts is both time-consuming and costly, while LLM-generated questions often lack the requisite diversity and difficulty. We present a design framework that combines the strengths of LLMs with a human-in-the-loop approach to generate a diverse array of challenging math questions. First, leveraging LLM metacognitive skills [Didolkar et al., 2024], a strong LLM is used to extract core "skills" from existing math datasets. These skills serve as the basis for generating novel and difficult questions: the LLM is prompted with a random pair of core skills that must both be used in the question. Requiring two very different skills within each question makes finding such questions an "out of distribution" task for both LLMs and humans. Our pipeline employs LLMs to iteratively generate and refine questions and solutions through multi-turn prompting. Human annotators then verify and further refine the questions, with their efficiency enhanced via further LLM interactions. Applying this pipeline to skills extracted from the MATH dataset [Hendrycks et al., 2021] resulted in **MATH$^2$** - a dataset of higher-quality math questions, as evidenced by the lower performance of all models on MATH$^2$ than on MATH. Although focused on mathematics, our methodology seems applicable to other domains requiring structured reasoning, and potentially as a component of *scalable oversight*. Also of interest is a striking relationship between models' performance on the two datasets: a model's success rate on MATH$^2$ is approximately the square of its success rate on MATH. This suggests that successfully solving a question in MATH$^2$ requires a nontrivial combination of two distinct math skills.