Poster in Workshop on Behavioral Machine Learning
Assessing Behavioral Alignment of Personality-Driven Generative Agents in Social Dilemma Games
Ritwik Bose · Mattson Ogg · Michael Wolmetz · Christopher Ratto
Large language model (LLM) proxies of human behavior have been demonstrated in limited settings where their actions appear plausible. In this study, we examine the variation and fidelity of LLM agent behaviors with respect to the "Big Five" personality traits. We conducted experiments on two social dilemma games using LLM agents whose prompts specified a personality profile and whether the agent could reflect on past rounds of the game. Results indicate that behavioral outcomes can be influenced by stipulating the magnitude of an agent's personality traits. Comparison with human studies indicates some degree of behavioral alignment and highlights gaps that stand in the way of accurately emulating human behavior.
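To make the prompting setup concrete, below is a minimal, hypothetical sketch of how an agent prompt could combine a Big Five profile with an optional reflection over past rounds of a social dilemma game. The 1-to-5 trait scale, the cooperate/defect wording, and the function name `build_prompt` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: building a personality-conditioned prompt for a
# social dilemma game agent. Trait scale (1-5) and wording are assumptions.

BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

def build_prompt(profile: dict, history: list[tuple[str, str]],
                 reflect: bool) -> str:
    """Compose a game prompt from a Big Five profile and past rounds."""
    trait_lines = "\n".join(
        f"- {trait}: {profile[trait]}/5" for trait in BIG_FIVE
    )
    prompt = (
        "You are a participant in a repeated social dilemma game.\n"
        "Your personality profile (Big Five, 1 = very low, 5 = very high):\n"
        f"{trait_lines}\n\n"
        "Each round, choose to COOPERATE or DEFECT.\n"
    )
    if reflect and history:
        # Optional reflection: show the agent its own and the other
        # player's choices from previous rounds.
        rounds = "\n".join(
            f"Round {i + 1}: you chose {mine}, the other player chose {theirs}"
            for i, (mine, theirs) in enumerate(history)
        )
        prompt += f"\nReflect on the previous rounds before deciding:\n{rounds}\n"
    prompt += "\nRespond with a single word: COOPERATE or DEFECT."
    return prompt

# Example: a highly agreeable, low-neuroticism agent with reflection enabled.
profile = {"openness": 3, "conscientiousness": 4, "extraversion": 2,
           "agreeableness": 5, "neuroticism": 1}
history = [("COOPERATE", "DEFECT"), ("COOPERATE", "COOPERATE")]
print(build_prompt(profile, history, reflect=True))
```

The resulting string would then be passed to an LLM of choice; the sketch deliberately stops short of any model call, since the paper does not specify the API used.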