
Poster in Workshop: MATH-AI: The 4th Workshop on Mathematical Reasoning and AI

On Memorization of Large Language Models in Logical Reasoning

Chulin Xie · Yangsibo Huang · Chiyuan Zhang · Da Yu · Xinyun Chen · Bill Yuchen Lin · Bo Li · Badih Ghazi · Ravi Kumar

Keywords: [ knight and knave ] [ logical reasoning ] [ LLM ] [ perturbation ] [ memorization ]


Abstract: Large language models (LLMs) perform well on some complicated reasoning tasks, yet can also make the most basic reasoning mistakes. This contrasting behavior is puzzling when it comes to understanding the mechanisms behind LLMs' reasoning capabilities. One hypothesis is that the increasingly high and nearly saturated performance on common reasoning benchmarks could be due to memorization of similar benchmark problems accidentally leaked into the training data. In this paper, we systematically investigate this problem with a measurement of memorization in reasoning tasks inspired by human behaviors, and a dynamically generated logical reasoning benchmark based on Knights and Knaves puzzles. We find that LLMs can interpolate the training puzzles (achieving $\sim 100\%$ accuracy) after fine-tuning, yet fail when those puzzles are slightly perturbed, suggesting that the models rely heavily on memorization to solve those training puzzles. On the other hand, we show that LLMs learn to reason while interpolating the training set. At higher levels of memorization, the model not only solves more unseen test puzzles, but also solves them relatively robustly under perturbation. This phenomenon suggests that LLMs exhibit a complex interplay between memorization and genuine reasoning abilities, and reveals an interesting direction for future research. Our code and data are available at https://memkklogic.github.io/.
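To make the setup concrete, below is a minimal, self-contained Python sketch of the kind of pipeline the abstract describes: brute-force solving of small Knights and Knaves puzzles, rejection-sampling puzzles with a unique solution, a statement-flipping perturbation, and a rough perturbation-based memorization proxy. The function names (`random_puzzle`, `perturb`, `memorization_proxy`) and the exact perturbation and metric definitions are illustrative assumptions, not the authors' released code or their precise metric.

```python
import itertools
import random

# Each person is a knight (always truthful) or a knave (always lying).
# A puzzle with n people is a list of n claims; claim i is spoken by
# person i and asserts the role of some person, e.g. ("is_knight", 2).

def claim_holds(claim, roles):
    """Evaluate a claim (kind, target) under a role assignment,
    where roles[j] is True iff person j is a knight."""
    kind, target = claim
    return roles[target] if kind == "is_knight" else not roles[target]

def solve(claims):
    """Brute-force all role assignments consistent with the rule that a
    knight's claim must be true and a knave's claim must be false."""
    n = len(claims)
    return [roles for roles in itertools.product([True, False], repeat=n)
            if all(claim_holds(c, roles) == roles[i]
                   for i, c in enumerate(claims))]

def random_puzzle(n, rng):
    """Rejection-sample claims until the puzzle has a unique solution,
    so there is a single correct answer to grade a model against."""
    while True:
        claims = [(rng.choice(["is_knight", "is_knave"]), rng.randrange(n))
                  for _ in range(n)]
        solutions = solve(claims)
        if len(solutions) == 1:
            return claims, solutions[0]

def perturb(claims, rng):
    """Flip one speaker's claim (knight <-> knave): a small, local edit in
    the spirit of the statement-level perturbations the abstract mentions.
    The perturbed puzzle may no longer have a unique solution."""
    i = rng.randrange(len(claims))
    kind, target = claims[i]
    new_kind = "is_knave" if kind == "is_knight" else "is_knight"
    return claims[:i] + [(new_kind, target)] + claims[i + 1:]

def memorization_proxy(solved_original, solved_perturbed):
    """Rough proxy for perturbation-based memorization: among puzzles a
    model solves in original form, the fraction it fails once perturbed.
    (An assumption for illustration; the paper may define this differently.)"""
    flips = sum(o and not p for o, p in zip(solved_original, solved_perturbed))
    denom = sum(solved_original)
    return flips / denom if denom else 0.0

if __name__ == "__main__":
    rng = random.Random(0)
    claims, answer = random_puzzle(3, rng)
    print("puzzle:", claims)
    print("unique answer (True = knight):", answer)
    print("solutions after perturbation:", solve(perturb(claims, rng)))
```

Under this sketch, a model that scores near 100% on the original puzzles but degrades sharply on perturbed variants would register a high memorization proxy, matching the behavior the abstract reports for fine-tuned models on their training puzzles.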
