Poster in Workshop: Information-Theoretic Principles in Cognitive Systems (InfoCog)
Large Language Models Behave (Almost) As Rational Speech Actors: Insights From Metaphor Understanding
Gaia Carenini · Louis Bodot · Luca Bischetti · Walter Schaeken · Valentina Bambini
Abstract:
What are the inner workings of large language models? Can they perform pragmatic inference? This paper characterizes, from a mathematical angle, the cognitive processes that large language models engage in metaphor understanding. Specifically, we show that GPT models embody reasoning mechanisms resembling the Rational Speech Act model for metaphors, which has previously been used to capture the principles of human pragmatic inference about figurative language. Our research contributes to the explainability and interpretability of large language models and highlights the usefulness of adopting a Bayesian model of human cognition to gain insight into the pragmatics of conversational agents.
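For readers unfamiliar with the Rational Speech Act (RSA) framework the abstract refers to, the following is a minimal sketch of the vanilla RSA recursion: a literal listener interprets utterances by their literal semantics, a pragmatic speaker soft-maximizes informativity, and a pragmatic listener inverts the speaker via Bayes' rule. This is a generic toy illustration, not the paper's method; the metaphor-specific RSA variant the abstract mentions extends this basic recursion. The lexicon, meanings, prior, and rationality parameter below are all hypothetical.

```python
import math

# Hypothetical toy domain: does "shark" (said of a person) convey danger?
UTTERANCES = ["shark", "person"]
MEANINGS = ["dangerous", "ordinary"]
# Literal semantics: LEXICON[u][m] = 1.0 if utterance u can literally convey m.
LEXICON = {"shark": {"dangerous": 1.0, "ordinary": 0.0},
           "person": {"dangerous": 1.0, "ordinary": 1.0}}
PRIOR = {"dangerous": 0.3, "ordinary": 0.7}  # assumed prior over meanings
ALPHA = 2.0                                   # speaker rationality parameter

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()} if z else d

def literal_listener(u):
    # L0(m | u) proportional to [[u]](m) * P(m)
    return normalize({m: LEXICON[u][m] * PRIOR[m] for m in MEANINGS})

def speaker(m):
    # S1(u | m) proportional to exp(alpha * log L0(m | u))
    scores = {}
    for u in UTTERANCES:
        p = literal_listener(u)[m]
        scores[u] = math.exp(ALPHA * math.log(p)) if p > 0 else 0.0
    return normalize(scores)

def pragmatic_listener(u):
    # L1(m | u) proportional to P(m) * S1(u | m)
    return normalize({m: PRIOR[m] * speaker(m)[u] for m in MEANINGS})

if __name__ == "__main__":
    print(pragmatic_listener("shark"))
```

In this toy setting the pragmatic listener concludes that "shark" conveys danger, because a speaker intending "ordinary" would have preferred the literal alternative "person". The metaphor model discussed in the paper enriches this recursion so that nonliteral readings can survive even when the literal semantics rules them out.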