Invited Talk
The Forward-Forward Algorithm for Training Deep Neural Networks
Geoffrey Hinton
Moderator: Kyunghyun Cho
Hall H (level 1)
Abstract:
I will describe a training algorithm for deep neural networks that does not require the neurons to propagate derivatives or remember neural activities. The algorithm can learn multi-level representations of streaming sensory data on the fly without interrupting the processing of the input stream. The algorithm scales much better than reinforcement learning and would be much easier to implement in cortex than backpropagation.
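The abstract does not spell out the mechanics, so below is a minimal sketch assuming the formulation described in the accompanying Forward-Forward paper: each layer is trained with two forward passes, one on real ("positive") data and one on fabricated ("negative") data, to push a per-layer "goodness" (sum of squared activities) above a threshold for positive data and below it for negative data. Because each layer's objective is local, no derivatives are propagated between layers and no activities need to be stored for a backward pass. The class name, layer sizes, learning rate, and threshold value here are illustrative choices, not details from the talk.

```python
# Sketch of a Forward-Forward-style layer with a purely local update rule.
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    def __init__(self, n_in, n_out, lr=0.03, theta=2.0):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        self.b = np.zeros(n_out)
        self.lr, self.theta = lr, theta   # theta: goodness threshold (assumed value)

    def _normalize(self, x):
        # Length-normalize the input so only the direction of the previous
        # layer's activity is passed on, not its goodness.
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    def local_update(self, x, positive):
        xn = self._normalize(x)
        h = np.maximum(0.0, xn @ self.W + self.b)          # ReLU activity
        goodness = np.sum(h ** 2, axis=1)                   # per-sample goodness
        # Logistic loss on (goodness - theta): label 1 for positive data, 0 for negative.
        p = 1.0 / (1.0 + np.exp(-(goodness - self.theta)))
        dgoodness = p - (1.0 if positive else 0.0)
        dz = 2.0 * h * dgoodness[:, None]                   # gradient w.r.t. pre-activation (via ReLU)
        self.W -= self.lr * xn.T @ dz / len(x)
        self.b -= self.lr * dz.mean(axis=0)
        # Pass the activity to the next layer; no derivative ever flows back through it.
        return h
```

A toy usage pattern on a stream of batches, with random arrays standing in for real positive data and for negative data (how negative data is generated is a separate design choice not covered here):

```python
layers = [FFLayer(784, 500), FFLayer(500, 500)]
for _ in range(100):                        # streaming mini-batches
    x_pos = rng.random((32, 784))           # stand-in for real sensory input
    x_neg = rng.random((32, 784))           # stand-in for negative (fabricated) input
    h_pos, h_neg = x_pos, x_neg
    for layer in layers:
        h_pos = layer.local_update(h_pos, positive=True)
        h_neg = layer.local_update(h_neg, positive=False)
```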