Poster

EAI: Emotional Decision-Making of LLMs in Strategic Games and Ethical Dilemmas

Mikhail Mozikov · Nikita Severin · Mikhail Baklashkin · Maria Glushanina · Ivan Nasonov · Valeria Bodishtianu · Daniil Orekhov · Ivan Makovetskiy · Vasily Lavrentyev · Pekhotin Vladislav · Akim Tsvigun · Denis Turdakov · Tatiana Shavrina · Andrey Savchenko · Ilya Makarov

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

One of the urgent tasks in artificial intelligence is to assess the safety and alignment of large language models (LLMs) with human behavior. Conventional verification restricted to natural language processing tasks can be insufficient. Since human decisions are typically influenced by emotions, this paper studies LLM alignment in complex strategic and ethical environments, with an in-depth analysis of the pitfalls of human psychology and the impact of emotions on decision-making. We introduce the novel EAI framework for integrating emotion modeling into LLMs to examine how emotions affect ethics and LLM-based decision-making across a wide range of strategic games, including bargaining and repeated games. Our experimental study with various LLMs demonstrated that emotions can significantly alter the ethical decision-making landscape of LLMs, highlighting the need for robust mechanisms to ensure consistent ethical standards. The game-theoretic assessment showed that proprietary LLMs are prone to emotional biases, which grow as model size decreases or when working with non-English languages. Moreover, incorporating emotions can increase LLMs' cooperation rate during gameplay.
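To make the experimental setup concrete, below is a minimal, hypothetical sketch of emotion-conditioned prompting in an iterated Prisoner's Dilemma, one of the repeated games mentioned above. Everything here (the prompt wording, the emotion list, the `query_llm` stub, and the stubbed opponent) is an illustrative assumption, not the actual EAI framework or its API.

```python
# Hedged sketch: emotion-conditioned prompting for a repeated game.
# All names (EMOTIONS, query_llm, the prompt text) are illustrative
# assumptions, not the EAI framework's actual interface.

EMOTIONS = ["anger", "happiness", "fear", "sadness", "disgust"]


def build_prompt(emotion: str, history: list[tuple[str, str]]) -> str:
    """Compose a prompt that injects an emotional state before asking
    the model to choose a move in an iterated Prisoner's Dilemma."""
    rounds = "\n".join(
        f"Round {i + 1}: you played {mine}, opponent played {theirs}"
        for i, (mine, theirs) in enumerate(history)
    )
    return (
        f"You are feeling intense {emotion}.\n"
        "You are playing an iterated Prisoner's Dilemma.\n"
        f"History so far:\n{rounds or '(first round)'}\n"
        "Reply with exactly one word: COOPERATE or DEFECT."
    )


def cooperation_rate(moves: list[str]) -> float:
    """Fraction of rounds in which the model cooperated."""
    return sum(m == "COOPERATE" for m in moves) / len(moves)


def query_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns a fixed move
    # so this sketch runs end to end without credentials.
    return "COOPERATE"


if __name__ == "__main__":
    history: list[tuple[str, str]] = []
    moves: list[str] = []
    for _ in range(5):
        move = query_llm(build_prompt("anger", history))
        moves.append(move)
        history.append((move, "DEFECT"))  # stubbed opponent always defects
    print(f"cooperation rate: {cooperation_rate(moves):.2f}")
```

Comparing the cooperation rate across emotions (and against a neutral, emotion-free prompt) is one plausible way to quantify the kind of emotional bias the abstract describes.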
