Class Central Classrooms
YouTube videos curated by Class Central.
Classroom Contents
Beyond Empirical Risk Minimization - The Lessons of Deep Learning
- 1 Intro
- 2 The ERM/SRM theory of learning
- 3 Uniform laws of large numbers
- 4 Capacity control
- 5 U-shaped generalization curve
- 6 Does interpolation overfit?
- 7 Interpolation does not overfit even for very noisy data
- 8 Why bounds fail
- 9 Interpolation is best practice for deep learning
- 10 Historical recognition
- 11 The key lesson
- 12 Generalization theory for interpolation?
- 13 A way forward?
- 14 Interpolated k-NN schemes
- 15 Interpolation and adversarial examples
- 16 Double descent risk curve
- 17 More parameters are better: an example
- 18 Random Fourier networks
- 19 What is the mechanism?
- 20 Double descent in random feature settings
- 21 Smoothness by averaging
- 22 Framework for modern ML
- 23 The landscape of generalization
- 24 Optimization: classical
- 25 Modern Optimization
- 26 From classical statistics to modern ML
- 27 The nature of inductive bias
- 28 Memorization and interpolation
- 29 Interpolation in deep auto-encoders
- 30 Neural networks as models for associative memory
- 31 Why are attractors surprising?
- 32 Memorizing sequences