Poster in Workshop: NeuroAI: Fusing Neuroscience and AI for Intelligent Solutions
RNN Replay: Leakage and Underdamped Dynamics
Josue Casco-Rodriguez · Richard Baraniuk
One hallmark of neural processing is the ability to dream or replay realistic sequences without any input. Recent work shows that denoising recurrent neural networks (RNNs) implicitly learn the score function of their hidden states, and can thus dream of realistic sequences via Langevin sampling. However, the current theory of Langevin sampling in RNNs fails to identify the nature of the score function, the impact of architectural choices like leakage and adaptation, or how to improve Langevin sampling in RNNs. We rectify these failures by: (1) using Markov Gaussian processes to explain how the score function can be difficult to approximate, yet admits a form that readily incorporates leakage; (2) showing that adaptation induces a form of underdamped Langevin sampling; and (3) proposing a more direct and effective form of underdamped Langevin sampling for RNNs.
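To make the sampling mechanism concrete, here is a minimal sketch (not the authors' implementation) of overdamped Langevin sampling of RNN hidden states. It assumes a trained denoiser `denoise(h)` mapping noisy hidden states toward clean ones; connecting the denoiser to the score via Tweedie's formula, `score ≈ (denoise(h) - h) / sigma**2`, is a standard assumption, not a detail taken from the abstract.

```python
import numpy as np

def langevin_sample(denoise, h0, sigma, step_size, n_steps, rng):
    """Draw a hidden-state sample via overdamped Langevin dynamics.

    denoise   : callable, trained denoiser h_noisy -> h_clean (assumed to exist)
    h0        : initial hidden state, shape (d,)
    sigma     : noise scale the denoiser was trained at
    step_size : Langevin step size
    """
    h = h0.copy()
    for _ in range(n_steps):
        # Tweedie estimate of the score (gradient of the log-density).
        score = (denoise(h) - h) / sigma**2
        noise = rng.standard_normal(h.shape)
        # Euler-Maruyama step of the overdamped Langevin SDE.
        h = h + step_size * score + np.sqrt(2.0 * step_size) * noise
    return h

# Toy usage: a stand-in denoiser that shrinks toward the origin, so the
# implied score is that of a zero-mean Gaussian.
rng = np.random.default_rng(0)
toy_denoise = lambda h: 0.9 * h
sample = langevin_sample(toy_denoise, rng.standard_normal(16), 1.0, 1e-2, 500, rng)
```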
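For contributions (2) and (3), a hedged sketch of the underdamped variant: an auxiliary momentum variable `v` (playing the role the abstract attributes to adaptation) is damped by a friction coefficient `gamma`. The discretization and parameter names here are generic choices, not the paper's proposed scheme.

```python
def underdamped_langevin_sample(denoise, h0, sigma, step_size, gamma, n_steps, rng):
    """Euler-Maruyama discretization of underdamped Langevin dynamics:
    dh = v dt,  dv = (score(h) - gamma * v) dt + sqrt(2 * gamma) dW.
    """
    h = h0.copy()
    v = np.zeros_like(h)                     # momentum / adaptation-like variable
    for _ in range(n_steps):
        score = (denoise(h) - h) / sigma**2  # same Tweedie score estimate as above
        noise = rng.standard_normal(h.shape)
        v = v + step_size * (score - gamma * v) + np.sqrt(2.0 * gamma * step_size) * noise
        h = h + step_size * v
    return h
```

The momentum term lets the chain traverse the state space with fewer random-walk reversals than the overdamped sampler, which is the usual motivation for underdamped Langevin methods.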