Poster
Learning Conjunctive Representations
Markus Pettersen · Frederik Rogge · Mikkel Lepperød
Hippocampal place cells are known for their spatially selective firing patterns, which have led to the suggestion that they encode an animal's location. However, place cells also respond to contextual cues, such as smell. Furthermore, they can remap: the firing fields and rates of cells change in response to environmental changes. How place cell responses emerge, and how these representations remap, is not fully understood. In this work, we propose a similarity-based objective function that translates proximity in space to proximity in representation. We show that a neural network trained to minimize the proposed objective learns place-like representations. We also show that the proposed objective extends naturally to other sources of information, such as context, encoded in the same way. When trained to encode multiple contexts in conjunction with location, networks learn distinct representations and exhibit remapping behavior across contexts. The proposed objective is invariant to distance-preserving transformations; such transformations (e.g., rotations) of the trained representation therefore yield new representations distinct from the original, without explicit relearning, akin to remapping. Our findings shed new light on the formation and encoding properties of place cells, and demonstrate a striking case of representational reuse.
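The abstract does not spell out the loss, so the following is a minimal sketch of one plausible form of such a similarity-based objective: a Gaussian kernel over pairwise spatial distances serves as the target similarity, and dot products between network outputs serve as the representational similarity. The encoder architecture, the kernel width `sigma`, and the one-hot context handling are all illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

# Hypothetical encoder: maps (position, context) pairs to a representation.
# Sizes and architecture are illustrative assumptions.
encoder = nn.Sequential(
    nn.Linear(2 + 4, 128),  # 2-D position + 4-D one-hot context
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),              # non-negative outputs, loosely rate-like
)

def similarity_loss(pos, ctx, sigma=0.3):
    """Match representational similarity to a spatial proximity kernel.

    pos: (B, 2) positions; ctx: (B, 4) one-hot context labels.
    Target similarity: Gaussian in distance within a context,
    zero across contexts (one assumed way to encode context
    "in the same way" as location).
    """
    r = encoder(torch.cat([pos, ctx], dim=-1))   # (B, D) representations
    rep_sim = r @ r.T                            # representational similarity
    dist2 = torch.cdist(pos, pos).pow(2)         # pairwise squared distances
    same_ctx = ctx @ ctx.T                       # 1 if same context, else 0
    target = same_ctx * torch.exp(-dist2 / (2 * sigma**2))
    return ((rep_sim - target) ** 2).mean()

# One optimization step on random positions/contexts (unit square, 4 contexts).
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
pos = torch.rand(256, 2)
ctx = nn.functional.one_hot(torch.randint(0, 4, (256,)), 4).float()
opt.zero_grad()
loss = similarity_loss(pos, ctx)
loss.backward()
opt.step()
```

Note that this loss depends on the representations only through pairwise dot products, so applying the same orthogonal transformation (e.g., a rotation) to every representation leaves it unchanged. That illustrates the abstract's point: distance-preserving transformations of a trained representation yield new, equally valid representations without relearning.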