Poster
in
Workshop: Workshop on Behavioral Machine Learning

Computational discovery of human reinforcement learning dynamics from choice behavior

Daniel Weinhardt · Maria Eckstein · Sebastian Musslick


Abstract:

This paper presents a machine learning approach for inferring interpretable models of human reinforcement learning from choice behavior. By combining recurrent neural networks, sparse identification of nonlinear dynamics (SINDy), and ensemble training, we automate the discovery of the cognitive mechanisms underlying behavioral data. Constraining the network to a low-dimensional memory state lets us extract latent dynamical-system variables that represent human behavior. These variables are then used to identify sparse, interpretable nonlinear dynamics describing how action values are updated. To address the noise inherent in human behavior, we train an ensemble of networks to ensure stable convergence. Our approach recovers a variety of ground-truth models in a two-armed bandit task, demonstrating its ability to infer expressive yet interpretable models of human reinforcement learning.
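The core idea of recovering a sparse value-update rule from choice data can be sketched in miniature: simulate a two-armed bandit agent with a known update rule, then regress the observed value changes onto a library of candidate terms and threshold small coefficients, in the style of SINDy. This is a minimal illustration, not the authors' pipeline; the learning rate, softmax temperature, term library, and threshold below are all illustrative assumptions, and the RNN-based latent-state extraction and ensemble training described in the abstract are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth agent (assumed for illustration): Rescorla-Wagner update
#   Q <- Q + alpha * (r - Q)
alpha = 0.3
reward_probs = [0.8, 0.2]   # two-armed bandit
n_trials = 2000

Q = np.zeros(2)
states, deltas = [], []     # (Q_chosen, reward) pairs and resulting updates
for _ in range(n_trials):
    # softmax choice with an assumed temperature of 3
    p = np.exp(3 * Q) / np.exp(3 * Q).sum()
    a = rng.choice(2, p=p)
    r = float(rng.random() < reward_probs[a])
    dQ = alpha * (r - Q[a])
    states.append([Q[a], r])
    deltas.append(dQ)
    Q[a] += dQ

# SINDy-style recovery: regress dQ onto a library of candidate terms,
# then threshold small coefficients to obtain a sparse update rule.
X = np.array(states)
q, r = X[:, 0], X[:, 1]
library = np.column_stack([np.ones_like(q), q, r, q * r, q**2])
names = ["1", "Q", "r", "Q*r", "Q^2"]

coef, *_ = np.linalg.lstsq(library, np.array(deltas), rcond=None)
coef[np.abs(coef) < 0.05] = 0.0  # one pass of hard thresholding

recovered = " + ".join(f"{c:.2f}*{n}" for c, n in zip(coef, names) if c != 0)
print("dQ =", recovered)
```

Because the simulated updates are a noiseless function of the chosen value and the reward, least squares recovers the two true terms (roughly `-0.30*Q + 0.30*r`) and the thresholding step zeroes out the spurious library terms; with real human data, the noise-robust ensemble procedure described in the abstract would take the place of this single regression.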
