Poster
Hybrid Neural Autoencoders for Stimulus Encoding in Visual and Other Sensory Neuroprostheses
Jacob Granley · Lucas Relic · Michael Beyeler
Hall J (level 1) #523
Keywords: [ Inverse Problem ] [ Bionic Vision ] [ BCI ] [ Brain Computer Interfaces ] [ Computational Modeling ] [ Stimulus Encoding ] [ Sensory Neuroprostheses ] [ Autoencoder ] [ Perception ] [ vision ] [ Argus ] [ Retinal Prostheses ] [ Cortical Prostheses ]
Sensory neuroprostheses are emerging as a promising technology to restore lost sensory function or augment human capabilities. However, the sensations elicited by current devices often appear artificial and distorted. Although current models can predict the neural or perceptual response to an electrical stimulus, an optimal stimulation strategy must solve the inverse problem: what stimulus is required to produce a desired response? Here, we frame this as an end-to-end optimization problem, in which a deep neural network stimulus encoder is trained to invert a known, fixed forward model that approximates the underlying biological system. As a proof of concept, we demonstrate the effectiveness of this Hybrid Neural Autoencoder (HNA) in visual neuroprostheses. We find that HNA produces high-fidelity, patient-specific stimuli representing handwritten digits and segmented images of everyday objects, and that it significantly outperforms conventional encoding strategies across all simulated patients. Overall, this is an important step toward the long-standing challenge of restoring high-quality vision to people living with incurable blindness and may prove a promising solution for a variety of neuroprosthetic technologies.
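The core idea, a trainable stimulus encoder optimized end-to-end through a fixed, differentiable forward model of the implant, can be sketched in a few lines of PyTorch. The toy forward model, layer sizes, electrode count, and loss below are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Minimal sketch of the Hybrid Neural Autoencoder (HNA) idea described above.
# All architectural details here are hypothetical placeholders.
import torch
import torch.nn as nn

class FixedForwardModel(nn.Module):
    """Stand-in for a known, differentiable phosphene model mapping
    electrode stimuli to a predicted percept. Its parameters are frozen."""
    def __init__(self, n_electrodes, img_size):
        super().__init__()
        # Hypothetical linear spread of each electrode's activation over the image.
        self.spread = nn.Linear(n_electrodes, img_size * img_size, bias=False)
        for p in self.parameters():
            p.requires_grad_(False)  # fixed: approximates the biological system
        self.img_size = img_size

    def forward(self, stimulus):
        percept = torch.sigmoid(self.spread(stimulus))
        return percept.view(-1, 1, self.img_size, self.img_size)

class StimulusEncoder(nn.Module):
    """Deep network trained to invert the forward model: target percept -> stimulus."""
    def __init__(self, n_electrodes, img_size):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(img_size * img_size, 256), nn.ReLU(),
            nn.Linear(256, n_electrodes), nn.ReLU(),  # non-negative stimulus amplitudes
        )

    def forward(self, target):
        return self.net(target)

n_electrodes, img_size = 60, 28  # e.g., an Argus-II-like electrode grid, MNIST-sized targets
forward_model = FixedForwardModel(n_electrodes, img_size)
encoder = StimulusEncoder(n_electrodes, img_size)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)  # only the encoder is trained
loss_fn = nn.MSELoss()

# One end-to-end training step on a dummy batch of target images.
targets = torch.rand(8, 1, img_size, img_size)
optimizer.zero_grad()
stimuli = encoder(targets)          # encoder proposes electrode stimuli
percepts = forward_model(stimuli)   # fixed model predicts the elicited percept
loss = loss_fn(percepts, targets)   # match predicted percept to the desired target
loss.backward()                     # gradients flow through the frozen forward model
optimizer.step()
```

In this setup the forward model acts as a fixed "decoder", so gradients from the perceptual loss propagate through it to update only the encoder, which is what allows the encoder to learn an approximate inverse of the patient-specific model.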