Class Central Classrooms (beta): YouTube videos curated by Class Central.
Classroom Contents
Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization
- 1 Intro
- 2 Machine learning: scientific context
- 3 Parametric supervised machine learning
- 4 Convex optimization problems
- 5 Theoretical analysis of deep learning
- 6 Optimization for multi-layer neural networks
- 7 Gradient descent for a single hidden layer
- 8 Wasserstein gradient flow
- 9 Many particle limit and global convergence (Chizat and Bach, 2018)
- 10 From optimization to statistics
- 11 Interpolation regime
- 12 Logistic regression for two-layer neural networks
- 13 From RKHS norm to variation norm
- 14 Kernel regime
- 15 Optimizing over two layers
- 16 Comparison of kernel and feature learning regimes
- 17 Discussion
- 18 Conclusion