Can Learning Theory Resist Deep Learning? - Francis Bach, INRIA
- 1 Intro
- 2 Scientific context
- 3 Parametric supervised machine learning
- 4 Convex optimization problems
- 5 Exponentially convergent SGD for smooth finite sums
- 6 Exponentially convergent SGD for finite sums
- 7 Convex optimization for machine learning
- 8 Theoretical analysis of deep learning
- 9 Optimization for multi-layer neural networks
- 10 Gradient descent for a single hidden layer
- 11 Optimization on measures
- 12 Many particle limit and global convergence (Chizat and Bach, 2018a)
- 13 Simple simulations with neural networks
- 14 From qualitative to quantitative results?
- 15 Lazy training (Chizat and Bach, 2018)
- 16 From lazy training to neural tangent kernel
- 17 Are state-of-the-art neural networks in the lazy regime?
- 18 Is the neural tangent kernel useful in practice?
- 19 Can learning theory resist deep learning?