Poster in Workshop: MATH-AI: The 4th Workshop on Mathematical Reasoning and AI
Semantic Self-Consistency: Enhancing Language Model Reasoning via Semantic Weighting
Tim Knappe · Ryan L Li · Ayush Chauhan · Kaylee Chhua · Kevin Zhu · Sean O'Brien
Keywords: [ decision-making enhancement ] [ filtering ] [ reasoning path embeddings ] [ anomalous results ] [ weighting algorithms ] [ vector embeddings ] [ chain-of-thought prompting ] [ semantic relevance ] [ aggregation ] [ fine-tuned BERT models ] [ candidate responses ]
While large language models (LLMs) have rapidly improved their performance on a broad range of tasks, they still often fall short on reasoning tasks. As LLMs become more integrated into diverse real-world applications, advancing their reasoning capabilities is crucial to their effectiveness on nuanced, complex problems. \citet{wang2023selfconsistency}'s \textit{self-consistency} framework shows that sampling multiple rationales before taking a majority vote reliably improves model performance across various closed-answer reasoning tasks. Standard methods based on this framework aggregate the final decisions of these rationales but fail to utilize the semantic information detailed in the step-by-step reasoning paths. Our work introduces \textit{semantic self-consistency}, which enhances this approach by incorporating and analyzing the reasoning paths of these rationales in addition to their final decisions before taking a majority vote. This method not only improves the reliability of reasoning paths but also yields more robust performance on complex reasoning tasks.
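The core idea can be sketched as a weighted majority vote: each sampled rationale's vote on the final answer is weighted by how semantically consistent its reasoning path is with the other sampled paths, so anomalous rationales contribute less. This is an illustrative sketch, not the authors' exact algorithm; in particular, the bag-of-words `embed` function is a stand-in for the learned embeddings (e.g., a fine-tuned BERT model) the abstract mentions, and `semantic_self_consistency` is a hypothetical name.

```python
from collections import Counter, defaultdict
import math

def embed(text):
    # Placeholder embedding: bag-of-words counts. The paper's method uses
    # learned vector embeddings of reasoning paths; this is only a stand-in.
    return Counter(text.lower().split())

def cosine(u, v):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def semantic_self_consistency(samples):
    # samples: list of (reasoning_path, final_answer) pairs from the LLM.
    embs = [embed(path) for path, _ in samples]
    votes = defaultdict(float)
    for i, (_, answer) in enumerate(samples):
        # Weight each vote by the mean similarity of its reasoning path
        # to the other sampled paths, down-weighting anomalous rationales.
        sims = [cosine(embs[i], embs[j])
                for j in range(len(samples)) if j != i]
        weight = sum(sims) / len(sims) if sims else 1.0
        votes[answer] += weight
    # Weighted majority vote over final answers.
    return max(votes, key=votes.get)
```

Note that with uniform weights this reduces to standard self-consistency; the semantic weighting only changes the outcome when anomalous reasoning paths would otherwise sway the plain majority vote.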