Stanford University

Stanford Seminar - Can the Brain Do Back-Propagation?

Stanford University via YouTube

Overview

Explore a Stanford seminar examining whether the brain could implement back-propagation in neural networks. Delve into online stochastic gradient descent, the obstacles that seem to prevent the brain from performing backprop, and alternative sources of supervision. Investigate the wake-sleep algorithm, newer unsupervised learning methods, and whether neurons can communicate real values. Consider the relationship between statistics and neuroscience, contrasting big data with big models. Examine dropout as a form of model averaging and different kinds of noise in the hidden activities. Learn how derivatives might be sent backwards, how temporal derivatives can represent error derivatives, and how STDP can be combined with reverse STDP. Discover what neuroscientists should expect to observe, what the top-down passes achieve, and how top-level error derivatives can be encoded. Finally, explore feedback alignment and why it works.
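The closing topic, feedback alignment, can be made concrete with a small sketch. The following NumPy code is an illustrative assumption, not material from the seminar: the hidden-layer error is sent back through a fixed random matrix B instead of the transposed forward weights, and the network still learns on a toy regression task. The network sizes, data, and learning rate are arbitrary choices for illustration.

    # Minimal sketch of feedback alignment on a toy regression task.
    # Instead of propagating errors through W2.T (exact backprop), the
    # hidden-layer error is computed through a fixed random matrix B.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 10, 32, 1

    W1 = rng.normal(0, 0.1, (n_hid, n_in))    # forward weights, layer 1
    W2 = rng.normal(0, 0.1, (n_out, n_hid))   # forward weights, layer 2
    B  = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback weights

    X = rng.normal(size=(200, n_in))
    y = X @ rng.normal(size=(n_in, n_out))     # toy linear target

    lr = 0.01
    for epoch in range(100):
        for x, t in zip(X, y):                 # online updates, one example at a time
            h = np.tanh(W1 @ x)                # hidden activity
            out = W2 @ h                       # linear output
            e = out - t                        # output error

            # Feedback alignment: send the error back through B, not W2.T
            dh = (B @ e) * (1 - h**2)

            W2 -= lr * np.outer(e, h)
            W1 -= lr * np.outer(dh, x)

The puzzle the seminar's final sections address is why this works at all: during training the forward weights tend to align with the fixed feedback weights, so the random feedback ends up delivering useful gradient information.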

Syllabus

Introduction.
Online stochastic gradient descent.
Four reasons why the brain cannot do backprop.
Sources of supervision that allow backprop learning without a separate supervision signal.
The wake-sleep algorithm (Hinton et al., 1995).
New methods for unsupervised learning.
Conclusion about supervision signals.
Can neurons communicate real values?
Statistics and the brain.
Big data versus big models.
Dropout as a form of model averaging.
Different kinds of noise in the hidden activities.
How are the derivatives sent backwards?
A fundamental representational decision: temporal derivatives represent error derivatives.
An early use of the idea that temporal derivatives encode error derivatives (Hinton & McClelland, 1988).
Combining STDP with reverse STDP.
If this is what is happening, what should neuroscientists see?
What the two top-down passes achieve.
A way to encode the top-level error derivatives.
A consequence of using temporal derivatives to code error derivatives.
The next problem.
Now a miracle occurs.
Why does feedback alignment work?

Taught by

Stanford Online
