Poster in Workshop: MATH-AI: The 4th Workshop on Mathematical Reasoning and AI
SBSC: Step-by-Step Coding for Improving Mathematical Olympiad Performance
Kunal Singh · Ankan Biswas · Sayandeep Bhowmick · Pradeep Moturi
Keywords: [ LLM Math Reasoning ] [ Olympiad-Math ] [ AI Math Reasoning ]
We propose Step-by-Step Coding (SBSC): a multi-turn math reasoning framework that enables Large Language Models (LLMs) to generate a sequence of programs for solving Olympiad-level math problems. At each turn/step, by leveraging the code execution outputs and programs of previous steps, the model generates the next sub-task and the corresponding program to complete it. SBSC allows a more granular, flexible and precise approach to problem-solving compared to existing methods. Extensive experiments highlight the effectiveness of SBSC in tackling competition- and Olympiad-level math problems. For Claude-3.5-Sonnet, we observe that SBSC (greedy decoding) surpasses existing state-of-the-art (SOTA) program-generation-based reasoning strategies by an absolute 10.7% on AMC-12, 8% on AIME and 12.6% on MathOdyssey. Given that SBSC is multi-turn in nature, we also benchmark SBSC's greedy decoding against self-consistency decoding results of existing SOTA math reasoning strategies and observe performance gains of an absolute 6.2% on AMC, 6.7% on AIME and 7.4% on MathOdyssey.
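To make the multi-turn mechanism concrete, the sketch below illustrates one plausible SBSC-style loop: the model is prompted for the next sub-task plus a program, the program is executed, and the execution output is appended to the conversation before the next turn. This is a minimal illustration, not the authors' implementation; the `call_llm` client, the prompt wording, the `FINAL ANSWER:` stopping convention and the step limit are all assumptions made for this example.

```python
import re
import subprocess
import sys


def call_llm(messages):
    """Hypothetical LLM client (e.g. Claude-3.5-Sonnet, greedy decoding).
    Plug in your own API call; returns the assistant reply as a string."""
    raise NotImplementedError("provide a model client here")


def extract_code(reply):
    """Pull the first ```python ...``` block out of the model's reply, if any."""
    match = re.search(r"```python\n(.*?)```", reply, re.DOTALL)
    return match.group(1) if match else None


def run_code(code):
    """Execute the generated program in a subprocess and capture its output."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=60,
    )
    return result.stdout + result.stderr


def sbsc_solve(problem, max_steps=15):
    """Multi-turn loop: each turn yields the next sub-task and its program;
    the execution output of that program is fed back for the following turn."""
    messages = [
        {"role": "system", "content": (
            "Solve the problem step by step. In each turn, state the next "
            "sub-task and give a Python program that completes it. "
            "When the problem is solved, write 'FINAL ANSWER:' followed by the answer."
        )},
        {"role": "user", "content": problem},
    ]
    for _ in range(max_steps):
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        if "FINAL ANSWER:" in reply:
            return reply.split("FINAL ANSWER:")[-1].strip()
        code = extract_code(reply)
        output = run_code(code) if code else "No code block found; please provide one."
        messages.append({"role": "user", "content": f"Execution output:\n{output}"})
    return None  # no final answer within the step budget
```

Under these assumptions, each sub-task's program can reuse variables and results printed by earlier steps (since their code and outputs remain in the conversation), which is what gives the step-by-step approach its finer-grained control compared with single-shot program generation.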