Poster Session in Workshop: Scientific Methods for Understanding Neural Networks
Emergence of Hierarchical Emotion Representations in Large Language Models
Bo Zhao · Maya Okawa · Eric Bigelow · Rose Yu · Tomer Ullman · Hidenori Tanaka
As large language models (LLMs) increasingly power conversational agents, understanding how they represent, predict, and influence human emotions is crucial for ethical deployment. By analyzing probabilistic dependencies between emotional states in model outputs, we uncover hierarchical structures in LLMs' emotion representations. Our findings show that larger models, such as LLaMA 3.1 (405B parameters), develop more complex hierarchies. We also find that better emotional modeling enhances persuasive abilities in synthetic negotiation tasks, with LLMs that more accurately predict counterparts' emotions achieving superior outcomes. Additionally, we explore how persona biases, such as gender and socioeconomic status, affect emotion recognition, revealing frequent misclassifications of minority personas. This study contributes to both the scientific understanding and ethical considerations of emotion modeling in LLMs.
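The abstract's core idea of recovering a hierarchy from probabilistic dependencies between emotion labels can be illustrated with a small sketch. Everything below is an illustrative assumption, not the authors' actual method: the emotion set, the synthetic per-prompt probabilities, and the choice of absolute correlation as the dependency measure followed by agglomerative clustering are all stand-ins for whatever the paper uses.

```python
# Hypothetical sketch: recover a hierarchy over emotion labels from
# per-prompt emotion probabilities via pairwise dependencies and
# agglomerative clustering. All data and measures are illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
emotions = ["joy", "gratitude", "sadness", "grief", "fear", "anxiety"]

# Synthetic stand-in for model outputs: rows = prompts, columns =
# P(emotion | prompt). Related emotions share a latent factor so that
# joy/gratitude, sadness/grief, and fear/anxiety covary in pairs.
base = rng.random((500, 3))
noise = rng.random((500, 6))
scores = np.column_stack([
    0.8 * base[:, 0] + 0.2 * noise[:, 0],  # joy
    0.8 * base[:, 0] + 0.2 * noise[:, 1],  # gratitude
    0.8 * base[:, 1] + 0.2 * noise[:, 2],  # sadness
    0.8 * base[:, 1] + 0.2 * noise[:, 3],  # grief
    0.8 * base[:, 2] + 0.2 * noise[:, 4],  # fear
    0.8 * base[:, 2] + 0.2 * noise[:, 5],  # anxiety
])

# Dependency matrix: absolute correlation between emotion scores;
# distance = 1 - |corr|, then average-linkage clustering yields a tree
# (the "hierarchy" over emotions).
corr = np.abs(np.corrcoef(scores.T))
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)
tree = linkage(squareform(dist, checks=False), method="average")

# Cutting the tree into 3 clusters groups the related emotion pairs.
labels = fcluster(tree, t=3, criterion="maxclust")
for emotion, cluster in zip(emotions, labels):
    print(emotion, cluster)
```

In this toy setup the tree's lowest merges join the correlated pairs, so a three-cluster cut recovers joy/gratitude, sadness/grief, and fear/anxiety as sibling leaves; on real model outputs the shape and depth of the recovered tree would depend on model scale, which is the abstract's point about larger models developing richer hierarchies.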