Beyond Empirical Risk Minimization - The Lessons of Deep Learning

MITCBMM via YouTube

Classroom Contents

  1. Intro
  2. The ERM/SRM theory of learning
  3. Uniform laws of large numbers
  4. Capacity control
  5. U-shaped generalization curve
  6. Does interpolation overfit?
  7. Interpolation does not overfit even for very noisy data
  8. Why bounds fail
  9. Interpolation is best practice for deep learning
  10. Historical recognition
  11. The key lesson
  12. Generalization theory for interpolation?
  13. A way forward?
  14. Interpolated k-NN schemes
  15. Interpolation and adversarial examples
  16. Double descent risk curve
  17. More parameters are better: an example
  18. Random Fourier networks
  19. What is the mechanism?
  20. Double descent in random feature settings (see the sketch after this list)
  21. Smoothness by averaging
  22. Framework for modern ML
  23. The landscape of generalization
  24. Optimization: classical
  25. Modern optimization
  26. From classical statistics to modern ML
  27. The nature of inductive bias
  28. Memorization and interpolation
  29. Interpolation in deep auto-encoders
  30. Neural networks as models for associative memory
  31. Why are attractors surprising?
  32. Memorizing sequences
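
The chapters on double descent (items 16-20) cover how test error can fall a second time once model capacity passes the interpolation threshold. As a hedged illustration of that curve, not code from the talk itself, the following minimal sketch fits minimum-norm least squares on random Fourier features and sweeps the feature count through the threshold; the target function, noise level, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Simple regression target, used purely for illustration.
    return np.sin(2 * np.pi * x)

n_train, n_test, noise = 20, 500, 0.1
x_train = rng.uniform(-1.0, 1.0, n_train)
y_train = target(x_train) + noise * rng.standard_normal(n_train)
x_test = rng.uniform(-1.0, 1.0, n_test)
y_test = target(x_test)

def rff(x, W, b):
    # Random Fourier feature map: one cosine feature per random frequency.
    return np.cos(np.outer(x, W) + b)

for n_features in (5, 10, 15, 20, 40, 100, 500):
    mses = []
    for _ in range(20):  # average over random draws of the feature map
        W = rng.normal(0.0, 5.0, n_features)
        b = rng.uniform(0.0, 2.0 * np.pi, n_features)
        # Minimum-norm least-squares solution via the pseudo-inverse; it
        # interpolates the training data once n_features >= n_train
        # (the interpolation threshold).
        coef = np.linalg.pinv(rff(x_train, W, b)) @ y_train
        pred = rff(x_test, W, b) @ coef
        mses.append(np.mean((pred - y_test) ** 2))
    print(f"features={n_features:4d}  mean test MSE={np.mean(mses):.3f}")
```

With settings like these, the reported test MSE typically spikes near n_features = n_train and then falls again as the model becomes heavily overparameterized, tracing the double descent shape the lecture describes.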
