Poster in Workshop: Socially Responsible Language Modelling Research (SoLaR)
Investigating Goal-Aligned and Empathetic Social Reasoning Strategies for Human-Like Social Intelligence in LLMs
Anirudh Gajula · Raaghav Malik
Keywords: [ Interaction ] [ Evaluation ] [ Transformers ] [ Large Language Models ] [ Agent ] [ Social ] [ Theory of Mind ] [ Social Intelligence ]
One key attribute of human-like intelligence is theory of mind, an essential capacity for navigating complex social landscapes and for fostering empathy, persuasion, and collaboration. Artificial theory-of-mind capabilities can be key to conflict resolution, improved human-computer interaction, and the development of more human-aligned systems. In this study, we explore three social reasoning strategies inspired by human psychology: Belief-Desire-Intention (BDI), Emotional Modeling and Processing (EMP), and Multiple Response Optimization (MRO). We evaluate all combinations of these strategies to see how they change the way agents collaborate, compete, and make plans or deals across a variety of complex social scenarios provided by the SOTOPIA benchmark and its SOTOPIA-Eval evaluation framework. By simulating interactions and scoring them with SOTOPIA-Eval, we found notable differences in social intelligence when different social reasoning strategies were applied to GPT-3.5-turbo. Specifically, all reasoning strategies yield higher believability scores, indicating more human-like dialogue. However, this comes at the cost of agents pursuing their own goals more persistently, especially under BDI reasoning, which generally lowers relationship scores. Combinations of strategies balance out these effects: overall, EMP performs best, followed by BDI+MRO and BDI+EMP+MRO. These results suggest the importance of such strategies in enhancing and guiding various social intelligence metrics and in shaping agent personality, and they demonstrate the use of practical reasoning to improve the social intelligence of large language models.
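The experimental design, in which every combination of the three strategies conditions a GPT-3.5-turbo agent, can be pictured as prompt-level experimental conditions. The Python sketch below is a minimal illustration of that enumeration only: the strategy wording in `STRATEGY_PROMPTS`, the `build_system_prompt` helper, and the baseline persona are assumptions made for this example, not the authors' actual prompts or code.

```python
# Hypothetical sketch: enumerating prompt-level strategy combinations.
# The strategy descriptions below are illustrative assumptions,
# not the prompts used in the paper.
from itertools import combinations

STRATEGY_PROMPTS = {
    "BDI": ("Before replying, infer your interlocutor's beliefs, "
            "desires, and intentions, and keep track of your own."),
    "EMP": ("Before replying, identify the emotions your interlocutor "
            "is expressing and consider how your reply may affect them."),
    "MRO": ("Draft several candidate replies, judge each against your "
            "social goal, and send only the best one."),
}

def build_system_prompt(strategies, base="You are a participant in a social scenario."):
    """Compose a system prompt from a base persona plus zero or more
    reasoning-strategy instructions (sorted for a stable ordering)."""
    parts = [base] + [STRATEGY_PROMPTS[s] for s in sorted(strategies)]
    return "\n".join(parts)

# One baseline condition plus all 7 non-empty strategy combinations.
conditions = [()] + [c for r in (1, 2, 3)
                     for c in combinations(sorted(STRATEGY_PROMPTS), r)]
for combo in conditions:
    label = "+".join(combo) if combo else "baseline"
    print(f"--- {label} ---")
    print(build_system_prompt(combo))
```

Each composed prompt would then drive one agent in a simulated SOTOPIA interaction, with SOTOPIA-Eval scoring the resulting dialogue on dimensions such as believability, goal completion, and relationship.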