Class Central Classrooms beta
YouTube videos curated by Class Central.
Classroom Contents
Fit Without Fear - An Over-Fitting Perspective on Modern Deep and Shallow Learning
- 1 Intro
- 2 Supervised ML
- 3 Interpolation and Overfitting
- 4 Modern ML
- 5 Fit without Fear
- 6 Overfitting perspective
- 7 Kernel machines
- 8 Interpolation in kernels
- 9 Interpolated classifiers work
- 10 What is going on?
- 11 Performance of kernels
- 12 Kernel methods for big data
- 13 The limits of smooth kernels
- 14 Eigenpro: practical implementation
- 15 Comparison with state-of-the-art
- 16 Improving speech intelligibility
- 17 Stochastic Gradient Descent
- 18 The Power of Interpolation
- 19 Optimality of mini-batch size 1
- 20 Minibatch size?
- 21 Real data example
- 22 Learning kernels for parallel computation?
- 23 Theory vs practice
- 24 Model complexity of interpolation?
- 25 How to test model complexity?
- 26 Testing model complexity for kernels
- 27 Levels of noise
- 28 Theoretical analyses fall short
- 29 Simplicial interpolation A-fit
- 30 Nearly Bayes optimal
- 31 Parting Thoughts I