

Poster

Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers

Ran Liu · Mehdi Azabou · Max Dabagia · Jingyun Xiao · Eva Dyer

Hall J (level 1) #431

Keywords: [ transformers ] [ population dynamics ] [ many-body systems ] [ multi-variate time-series ] [ neural activity ]


Abstract:

Complex time-varying systems are often studied by abstracting away from the dynamics of individual components to build a model of the population-level dynamics from the start. However, when building a population-level description, it can be easy to lose sight of each individual and how it contributes to the larger picture. In this paper, we present a novel transformer architecture for learning from time-varying data that builds descriptions of both the individual and the collective population dynamics. Rather than combining all of our data into the model at the outset, we develop a separable architecture that operates on individual time series first before passing them forward; this induces a permutation-invariance property and can be used to transfer across systems of different size and order. After demonstrating that our model can successfully recover complex interactions and dynamics in many-body systems, we apply it to populations of neurons in the nervous system. On neural activity datasets, we show that our model not only yields robust decoding performance, but also transfers impressively across recordings of different animals without any neuron-level correspondence. By enabling flexible pre-training that can be transferred to neural recordings of different size and order, our work provides a first step towards creating a foundation model for neural decoding.
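
To make the separable design concrete, the sketch below shows one plausible way such an architecture could be organized in PyTorch. This is an illustrative approximation, not the authors' implementation: the `SeparableBlock` name, the use of `nn.TransformerEncoderLayer`, and all dimensions are assumptions for demonstration. A temporal transformer attends within each channel's time series, a spatial transformer then attends across channels at every time step, and because no channel-order encoding is used, pooling over channels yields a permutation-invariant population representation.

```python
# Illustrative sketch (not the paper's code): a separable transformer that
# models each channel's dynamics before mixing information across channels.
import torch
import torch.nn as nn


class SeparableBlock(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Temporal stage: each channel's time series is modeled independently.
        self.temporal = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        # Population stage: attention across channels at every time step.
        self.spatial = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, d_model)
        b, c, t, d = x.shape
        # Individual dynamics: fold channels into the batch dimension so the
        # temporal transformer never mixes information across channels.
        x = self.temporal(x.reshape(b * c, t, d)).reshape(b, c, t, d)
        # Collective dynamics: fold time into the batch dimension so the
        # spatial transformer attends over the (unordered) set of channels.
        x = x.permute(0, 2, 1, 3).reshape(b * t, c, d)
        x = self.spatial(x).reshape(b, t, c, d).permute(0, 2, 1, 3)
        return x


# Toy usage: 2 recordings, 30 channels, 100 time bins, 64-dim token embeddings.
tokens = torch.randn(2, 30, 100, 64)
block = SeparableBlock()
out = block(tokens)           # (2, 30, 100, 64): per-channel representations
population = out.mean(dim=1)  # pool over channels -> permutation-invariant summary
print(out.shape, population.shape)
```

Because the population stage treats channels as an unordered set, a model of this shape can in principle be applied to recordings with a different number or ordering of neurons, which is the property the abstract points to for cross-animal transfer.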
