Overview
Syllabus
Introduction.
Online stochastic gradient descent (sketch below).
Four reasons why the brain cannot do backprop.
Sources of a training signal that allow backprop learning without a separate supervision signal.
The wake-sleep algorithm (Hinton et al., 1995; sketch below).
New methods for unsupervised learning.
Conclusion about supervision signals.
Can neurons communicate real values?
Statistics and the brain.
Big data versus big models.
Dropout as a form of model averaging (sketch below).
Different kinds of noise in the hidden activities.
How are the derivatives sent backwards?
A fundamental representational decision: temporal derivatives represent error derivatives (worked equation below).
An early use of the idea that temporal derivatives encode error derivatives (Hinton & McClelland, 1988).
Combining STDP with reverse STDP.
If this is what is happening, what should neuroscientists see?
What the two top-down passes achieve.
A way to encode the top-level error derivatives.
A consequence of using temporal derivatives to code error derivatives.
The next problem.
Now a miracle occurs.
Why does feedback alignment work? (sketch below)
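Example sketches
For the "Online stochastic gradient descent" item, a minimal sketch of online SGD on a toy linear-regression problem: the weights are updated after every single training case rather than after a full batch. The model, data, and learning rate are illustrative assumptions, not details from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: targets come from a hidden linear rule plus noise.
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(1000, 2))
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(2)          # weights being learned
lr = 0.01                # learning rate

for x_i, y_i in zip(X, y):
    pred = x_i @ w                   # forward pass on one training case
    grad = (pred - y_i) * x_i        # gradient of 0.5 * (pred - y)^2 w.r.t. w
    w -= lr * grad                   # online update: one case, one step

print("learned weights:", w)         # should end up close to [2, -3]
```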
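For "The wake-sleep algorithm", a minimal sketch of wake-sleep learning for a single pair of layers with binary stochastic units, following the general recipe of Hinton et al. (1995): the wake phase trains the generative weights to reconstruct the data from recognition-driven hidden states, and the sleep phase trains the recognition weights on fantasies drawn from the generative model. The layer sizes, data, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
sample = lambda p: (rng.random(p.shape) < p).astype(float)   # Bernoulli units

n_vis, n_hid, lr = 6, 3, 0.05
R = rng.normal(0, 0.1, (n_hid, n_vis))   # recognition weights (bottom-up)
G = rng.normal(0, 0.1, (n_vis, n_hid))   # generative weights (top-down)
b_h = np.zeros(n_hid)                    # generative biases: prior over hidden units

data = (rng.random((500, n_vis)) < 0.3).astype(float)   # toy binary data

for v in data:
    # Wake phase: drive the hidden units bottom-up, then make the
    # generative weights better at reconstructing the observed data.
    h = sample(sigmoid(R @ v))
    v_recon = sigmoid(G @ h)
    G += lr * np.outer(v - v_recon, h)
    b_h += lr * (h - sigmoid(b_h))

    # Sleep phase: generate a fantasy top-down, then make the recognition
    # weights better at recovering its hidden cause.
    h_fantasy = sample(sigmoid(b_h))
    v_fantasy = sample(sigmoid(G @ h_fantasy))
    h_recog = sigmoid(R @ v_fantasy)
    R += lr * np.outer(h_fantasy - h_recog, v_fantasy)
```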
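For "Dropout as a form of model averaging", a minimal sketch of a hidden layer with dropout: at training time each unit is dropped at random, which samples one of exponentially many thinned networks, and at test time the activities are rescaled by the keep probability, which approximates averaging over all of them. The layer sizes and drop probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p_drop = 0.5                              # probability of dropping a hidden unit

def hidden_layer(x, W, train=True):
    h = np.maximum(0.0, W @ x)            # ReLU hidden activities
    if train:
        mask = (rng.random(h.shape) >= p_drop).astype(float)
        return h * mask                   # one randomly thinned network
    return h * (1.0 - p_drop)             # test time: rescale to approximate the average

W = rng.normal(size=(4, 3))
x = rng.normal(size=3)
print(hidden_layer(x, W, train=True))     # noisy activities from one thinned net
print(hidden_layer(x, W, train=False))    # deterministic, rescaled "average" net
```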
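For "temporal derivatives represent error derivatives" and "Combining STDP with reverse STDP", a worked equation written in my own notation (not notation from the talk), under the stated assumption about what the temporal derivative encodes.

```latex
% Assumed notation: x_i presynaptic activity, y_j postsynaptic activity,
% z_j = \sum_i w_{ij} x_i the total input to unit j, E the error.
% Assumption: the temporal derivative of the postsynaptic activity encodes
% the error derivative with respect to the unit's total input,
\[
  \frac{\partial y_j}{\partial t} \;\propto\; -\frac{\partial E}{\partial z_j}.
\]
% Then a synaptic rule that combines STDP with reverse STDP, changing the
% weight in proportion to presynaptic activity times the postsynaptic rate
% of change, gives
\[
  \Delta w_{ij} \;\propto\; x_i \,\frac{\partial y_j}{\partial t}
  \;\propto\; -x_i \,\frac{\partial E}{\partial z_j}
  \;=\; -\frac{\partial E}{\partial w_{ij}},
\]
% which is the backpropagation weight update, up to a learning rate.
```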
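For "Why does feedback alignment work?", a minimal sketch of feedback alignment in the sense of Lillicrap et al. (2016): errors are sent backwards through a fixed random matrix B rather than the transpose of the forward weights, and learning still works because the forward weights drift into partial alignment with B. The network sizes, teacher, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out, lr = 10, 20, 5, 0.02
W1 = rng.normal(0, 0.1, (n_hid, n_in))    # forward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hid))   # forward weights, layer 2
B  = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback weights

T = rng.normal(0, 1.0, (n_out, n_in))     # teacher: a random linear target map

for step in range(5000):
    x = rng.normal(size=n_in)
    y_target = T @ x

    h = np.tanh(W1 @ x)                   # forward pass
    y = W2 @ h

    e = y - y_target                      # output error
    delta_h = (B @ e) * (1.0 - h**2)      # feedback alignment: use B, not W2.T

    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)

# How well have the forward weights aligned with the fixed feedback weights?
cos = np.sum(W2.T * B) / (np.linalg.norm(W2) * np.linalg.norm(B))
print("cosine(W2.T, B) =", round(float(cos), 3))
```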
Taught by
Stanford Online