

Poster

Bipartite expander Hopfield networks as self-decoding high-capacity error correcting codes

Rishidev Chaudhuri · Ila Fiete

East Exhibition Hall B, C #184

Keywords: [ Neuroscience and Cognitive Science ] [ Memory ] [ Neuroscience and Cognitive Science -> Neural Coding; Theory ] [ Information Theory ]


Abstract:

Neural network models of memory and error correction famously include the Hopfield network, which can directly store, and error-correct through its dynamics, arbitrary N-bit patterns, but only ~N of them. At the other end of the spectrum, Shannon's coding theory established that it is possible to represent exponentially many states (~e^N) using N symbols, in such a way that an optimal decoder can correct all noise up to a threshold. We prove that it is possible to construct an associative, content-addressable network that combines the properties of strong error-correcting codes and Hopfield networks: it simultaneously possesses exponentially many stable states; these states are robust, with basins of attraction large enough that they can be correctly recovered despite errors in a finite fraction of all nodes; and the errors are corrected intrinsically, by the network's own dynamics. The network is a two-layer Boltzmann machine with simple neural dynamics, low-dynamic-range (binary) pairwise synaptic connections, and sparse expander-graph connectivity. Thus, quasi-random sparse structures, characteristic of important error-correcting codes, may support high-performance computation in artificial neural networks and the brain.
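The network described in the abstract is built from the same ingredients as expander codes: a sparse bipartite graph between code bits and constraint units, with errors removed by simple local update rules. The following is a minimal sketch of that flavor, assuming a random sparse bipartite graph with parity-check constraints and Sipser-Spielman-style bit-flipping dynamics in place of the paper's Boltzmann-machine implementation; the sizes N and M, the degree d, and the helpers violated and decode are illustrative choices, not the authors' construction.

```python
# Minimal sketch: parity-check constraints on a sparse random bipartite
# graph, with bit-flipping dynamics in the spirit of expander-code
# decoding. All sizes and the graph itself are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N = 200  # visible (code) bits
M = 100  # hidden constraint units
d = 5    # constraints attached to each visible bit (sparse graph)

# Binary bipartite connectivity: each visible bit connects to d
# randomly chosen constraints.
H = np.zeros((M, N), dtype=int)
for j in range(N):
    H[rng.choice(M, size=d, replace=False), j] = 1

def violated(x):
    """A constraint is violated if its attached bits sum to odd parity."""
    return (H @ x) % 2

def decode(x, max_iters=100):
    """Flip every bit whose violated constraints outnumber its satisfied
    ones; any pattern satisfying all constraints is a fixed point."""
    x = x.copy()
    for _ in range(max_iters):
        v = violated(x)
        counts = H.T @ v          # violated constraints touching each bit
        flips = counts > d / 2
        if not flips.any():
            break                 # reached a stable (stored) state
        x[flips] ^= 1
    return x

# The all-zeros pattern satisfies every parity check, so it is stable.
x0 = np.zeros(N, dtype=int)

# Corrupt a few bits and let the dynamics clean them up.
noisy = x0.copy()
noisy[rng.choice(N, size=5, replace=False)] ^= 1
recovered = decode(noisy)
print("errors remaining:", int((recovered != x0).sum()))  # typically 0
```

Any pattern satisfying all parity checks (Hx = 0 mod 2) is a fixed point of these dynamics, and there are at least 2^(N-M) such patterns, exponentially many in N; when the graph is a sufficiently good expander, the flip rule provably corrects a constant fraction of errors, which is the property the paper realizes with simple pairwise neural interactions.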
