

Poster

Stimulus domain transfer in recurrent models for large scale cortical population prediction on video

Fabian Sinz · Alexander Ecker · Paul Fahey · Edgar Walker · Erick M Cobos · Emmanouil Froudarakis · Dimitri Yatsenko · Xaq Pitkow · Jacob Reimer · Andreas Tolias

Room 210 #25

Keywords: [ Neuroscience ] [ Neural Coding ]


Abstract:

To better understand the representations in visual cortex, we need to generate better predictions of neural activity in awake animals presented with their ecological input: natural video. Despite recent advances in models for static images, models for predicting responses to natural video are scarce, and standard linear-nonlinear models perform poorly. We developed a new deep recurrent network architecture that predicts inferred spiking activity of thousands of mouse V1 neurons simultaneously recorded with two-photon microscopy, while accounting for confounding factors such as the animal's gaze position and brain state changes related to running state and pupil dilation. Powerful system identification models provide an opportunity to gain insight into cortical functions through in silico experiments that can subsequently be tested in the brain. However, in many cases this approach requires that the model generalize to stimulus statistics it was not trained on, such as band-limited noise and other parameterized stimuli. We investigated these domain transfer properties in our model and found that our model trained on natural images is able to correctly predict the orientation tuning of neurons in response to artificial noise stimuli. Finally, we show that we can fully generalize from movies to noise and maintain high predictive performance on both stimulus domains by fine-tuning only the final layer's weights of a network otherwise trained on natural movies. The converse, however, is not true.
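The abstract does not spell out the architecture, but the described ingredients (a shared core over video frames, recurrence over time, per-neuron readouts, and multiplicative modulation by behavioral state) suggest a structure like the minimal PyTorch sketch below. All module names, layer sizes, the GRU recurrence, and the behavioral gain term are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class RecurrentPredictor(nn.Module):
    """Hypothetical sketch: conv core -> GRU over time -> per-neuron
    linear readout, multiplicatively gated by behavioral state
    (e.g., pupil dilation and running speed)."""

    def __init__(self, n_neurons, channels=16, hidden=64, feat_hw=36 * 64):
        super().__init__()
        # Shared convolutional core applied to each video frame.
        self.core = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=7, padding=3), nn.ELU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ELU(),
        )
        # Recurrence over time on flattened core features.
        self.rnn = nn.GRU(channels * feat_hw, hidden, batch_first=True)
        # Final layer: one linear readout weight vector per neuron.
        self.readout = nn.Linear(hidden, n_neurons)
        # Behavioral inputs (pupil, running) -> positive per-neuron gain.
        self.modulator = nn.Linear(2, n_neurons)

    def forward(self, video, behavior):
        # video: (batch, time, 1, H, W); behavior: (batch, time, 2)
        b, t = video.shape[:2]
        feats = self.core(video.flatten(0, 1)).flatten(1)  # (b*t, C*H*W)
        hidden, _ = self.rnn(feats.view(b, t, -1))         # (b, t, hidden)
        gain = torch.exp(self.modulator(behavior))         # positive gain
        rate = nn.functional.elu(self.readout(hidden)) + 1 # non-negative rate
        return rate * gain                                 # (b, t, n_neurons)
```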
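The final-layer transfer result also implies a simple recipe: freeze the shared core and recurrent weights of a movie-trained network, then re-train only the readout on the new stimulus domain. A hedged sketch follows, reusing the hypothetical `RecurrentPredictor` above; the Poisson loss, optimizer choice, and `noise_loader` data iterator are assumptions for illustration.

```python
# Freeze everything except the final readout, then fine-tune on noise stimuli.
model = RecurrentPredictor(n_neurons=5000)
for p in model.parameters():
    p.requires_grad = False
for p in model.readout.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
loss_fn = torch.nn.PoissonNLLLoss(log_input=False)  # rates are non-negative

# Hypothetical training loop over a noise-stimulus dataset:
# for video, behavior, spikes in noise_loader:
#     optimizer.zero_grad()
#     loss = loss_fn(model(video, behavior), spikes)
#     loss.backward()
#     optimizer.step()
```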
