Poster in Workshop: UniReps: Unifying Representations in Neural Models
Degradation and plasticity in convolutional neural networks: An investigation of internal representations
Jasmine Moore · Vibujithan Vigneshwaran · Matthias Wilms · Nils Daniel Forkert
The architecture and information processing of convolutional neural networks were originally heavily inspired by the biological visual system. In this work, we make use of these similarities to create an in silico model of neurodegenerative diseases affecting the visual system. We examine the layer-wise internal representations and accuracy of the model as it is subjected to synaptic decay and retraining, to investigate whether it is possible to capture a biologically realistic profile of visual cognitive decline. To this end, we progressively decay and freeze model synapses in a highly compressed model trained for object recognition. Between iterations of progressive model degradation, we retrain the remaining unaffected synapses on subsets of the initial training data to simulate continual neuroplasticity. The results of this work show that even with high levels of synaptic decay and limited retraining data, the model is able to regain internal representations similar to those of the unaffected, healthy model. We also demonstrate that throughout a complete cycle of model degradation, the early layers of the model retain high centered kernel alignment (CKA) similarity to the healthy model, while later layers, which encode high-level information, are much more prone to deviating from it.
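The two core operations described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: `decay_step` assumes decayed synapses are selected uniformly at random, zeroed, and frozen via a boolean mask, and `linear_cka` is the standard linear (unbiased-centering) form of centered kernel alignment applied to two layers' activation matrices; the function names and the random-selection scheme are assumptions for illustration.

```python
import numpy as np


def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape (n_samples, n_features)."""
    # Center each feature dimension before comparing representations.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)


def decay_step(weights, mask, decay_frac, rng):
    """One iteration of synaptic decay: zero out and freeze a fraction of
    the still-active synapses (mask == False marks decayed/frozen ones).
    The uniform-random selection here is an illustrative assumption."""
    active = np.flatnonzero(mask)
    n_decay = int(decay_frac * active.size)
    decayed = rng.choice(active, size=n_decay, replace=False)
    mask = mask.copy()
    mask[decayed] = False
    return weights * mask, mask
```

Between calls to `decay_step`, only weights where `mask` is still `True` would be updated during retraining, and `linear_cka` would compare each degraded layer's activations against the healthy model's activations on the same inputs. CKA equals 1 for identical representations and approaches 0 for unrelated ones.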