Poster in Workshop: MATH-AI: The 4th Workshop on Mathematical Reasoning and AI
CAFA: Coding as Auto-Formulation Can Boost Large Language Models in Solving Linear Programming Problem
Haoxuan Deng · Bohao Zheng · YURI JIANG · Trung Tran
Keywords: [ Linear Programming ] [ machine mathematical reasoning ] [ large language models ] [ operations research ]
Large language models (LLMs) open new doors for Operations Research (OR). While initial studies explored multi-agent strategies for LLMs in OR, our research challenges the assumption that such complex multi-step pipelines are necessary to achieve superior results on Linear Programming (LP) problems. This paper introduces a streamlined methodology: Coding as Auto-Formulation (CAFA). In contrast to multi-agent pipelines, CAFA uses a single compact prompt that guides the LLM to formalize the given problem text into lines of code; the generated code is then post-processed and executed to obtain the answer. The proposed method is tested on the NL4OPT dataset with different LLMs. Results suggest that, despite its simplicity, CAFA consistently enhances LP problem-solving accuracy across models. This study aims to shed light on better unleashing LLMs' mathematical reasoning capability through more streamlined prompts.
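To make the described pipeline concrete, below is a minimal sketch of a CAFA-style workflow: one compact prompt asks the model to formalize the word problem as executable code, and the output is post-processed and run to recover the answer. The prompt wording, the `query_llm` stub, the use of `scipy.optimize.linprog`, and the `answer` variable convention are all illustrative assumptions, not the authors' exact implementation.

```python
import re

# Hypothetical stand-in for an LLM API call; CAFA itself is model-agnostic.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to the LLM of your choice.")

# A compact CAFA-style prompt (illustrative wording): ask the model to
# formalize the word problem directly as solver code.
CAFA_PROMPT = (
    "Formalize the following linear programming problem as Python code using "
    "scipy.optimize.linprog. Store the optimal objective value in a variable "
    "named `answer`.\n\nProblem:\n{problem}\n"
)

def solve_lp_with_cafa(problem_text: str) -> float:
    raw = query_llm(CAFA_PROMPT.format(problem=problem_text))
    # Post-processing: strip a markdown code fence if the model wrapped its output.
    match = re.search(r"```(?:python)?\n(.*?)```", raw, re.DOTALL)
    code = match.group(1) if match else raw
    # Execute the generated formulation and read back the computed answer.
    namespace: dict = {}
    exec(code, namespace)
    return namespace["answer"]
```

In this sketch the only moving part is the single prompt; there is no agent orchestration, which is the contrast with multi-agent OR pipelines that the abstract draws.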