Poster
in
Workshop: MATH-AI: The 3rd Workshop on Mathematical Reasoning and AI
Basic Arithmetic Properties in the Space of Language Model Prompts
Mateusz Krubiński
Keywords: [ Prompting ] [ Arithmetic Properties ] [ Large Language Models ]
Large pre-trained language models (LLMs) that can effectively utilize enormous amounts of unlabeled textual data have recently changed the whole field of Natural Language Processing. Through prompting techniques enabled by their in-context learning capabilities, LLMs have been shown to perform on par with dedicated models trained for downstream tasks. One such task is numerical reasoning and, in particular, the ability to conduct basic arithmetic operations. The question we wish to answer is whether the basic properties of arithmetic operations, such as the commutative property, hold in the space of LLM prompts - does asking an LLM to compute 13+37 vs 37+13 result, on average, in the same outcome? In contrast to previous works, which reported Accuracy only, we take a closer look (MAE, Pearson's R) at the error distribution to better understand the performance with regard to scaling laws.