From Classical Statistics to Modern ML - The Lessons of Deep Learning - Mikhail Belkin

Institute for Advanced Study via YouTube

"Double descent" risk curve

16 of 27

16 of 27

"Double descent" risk curve

Classroom Contents

  1. Intro
  2. Empirical Risk Minimization
  3. The ERM/SRM theory of learning
  4. Uniform laws of large numbers
  5. Capacity control
  6. U-shaped generalization curve
  7. Does interpolation overfit?
  8. Interpolation does not overfit even for very noisy data
  9. Why bounds fail
  10. Interpolation is best practice for deep learning
  11. Historical recognition
  12. Where we are now: the key lesson
  13. Generalization theory for interpolation?
  14. Interpolated k-NN schemes
  15. Interpolation and adversarial examples
  16. "Double descent" risk curve (see the sketch after this list)
  17. Random Fourier networks
  18. What is the mechanism?
  19. Is infinite width optimal?
  20. Smoothness by averaging
  21. Double descent in random feature settings
  22. Framework for modern ML
  23. The landscape of generalization
  24. Optimization: classical
  25. The power of interpolation
  26. Learning from deep learning: fast and effective kernel machines
  27. Points and lessons
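The featured topic of this classroom is the "double descent" risk curve (sections 16, 17, and 21). Below is a minimal sketch of that phenomenon, assuming a toy 1-D regression task with random Fourier features and a minimum-norm least-squares fit; the target function, noise level, bandwidth, and feature counts are all illustrative choices, not material taken from the lecture.

    # A minimal sketch (assumed setup, not code from the lecture) of the
    # "double descent" risk curve using random Fourier features and a
    # minimum-norm least-squares fit. All choices below (target function,
    # noise level, bandwidth, feature counts) are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def target(x):
        return np.sin(2 * np.pi * x)

    n_train, noise = 40, 0.3
    x_train = rng.uniform(-1, 1, n_train)
    y_train = target(x_train) + noise * rng.standard_normal(n_train)
    x_test = rng.uniform(-1, 1, 500)
    y_test = target(x_test)

    def rff(x, w, b):
        # Random Fourier features: cos(w * x + b) with fixed random w, b.
        return np.cos(np.outer(x, w) + b)

    for n_features in [5, 10, 20, 40, 80, 160, 640, 2560]:
        w = rng.normal(0.0, 10.0, n_features)
        b = rng.uniform(0.0, 2.0 * np.pi, n_features)
        # lstsq gives ordinary least squares below the interpolation
        # threshold (n_features < n_train) and the minimum-norm
        # interpolating solution above it (n_features > n_train).
        coef, *_ = np.linalg.lstsq(rff(x_train, w, b), y_train, rcond=None)
        test_mse = np.mean((rff(x_test, w, b) @ coef - y_test) ** 2)
        print(f"{n_features:5d} features: test MSE = {test_mse:.3f}")

With settings like these, the printed test error typically traces the shape named in section 16: a classical U-shaped curve below the interpolation threshold (around 40 features here, matching the training set size), a spike near the threshold, and a second descent as the number of features grows further.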
