Toward Theoretical Understanding of Deep Learning - Lecture 2

International Centre for Theoretical Sciences via YouTube

Classroom Contents

  1. Date & Time: Tuesday, 12 February,
  2. Date & Time: Tuesday, 12 February,
  3. Date & Time: Wednesday, 13 February,
  4. Start
  5. Toward theoretical understanding of deep learning
  6. Machine learning (ML): A new kind of science
  7. Recap
  8. Training via Gradient Descent: a "natural algorithm"
  9. Subcase: deep learning*
  10. Brief history: networks of "artificial neurons"
  11. Some questions
  12. Part 1: Why overparameterization and/or overprovisioning?
  13. Overprovisioning may help optimization, part 1: a folklore experiment
  14. Overprovisioning can help, part 2: Allowing more
  15. Acceleration effect of increasing depth
  16. But textbooks warn us: Larger models can "overfit"
  17. Popular belief/conjecture
  18. Noise stability: understanding one layer (no nonlinearity)
  19. Proof sketch: Noise stability implies the deep net can be made low-dimensional
  20. The Quantitative Bound
  21. Correlation with Generalization: qualitative check
  22. Concluding thoughts on generalization
  23. Part 2: Optimization in deep learning
  24. Basic concepts
  25. Curse of dimensionality
  26. Gradient descent in an unknown landscape
  27. Gradient descent in an unknown landscape, contd.
  28. Evading saddle points
  29. Active area: Landscape Analysis
  30. New trend: Trajectory Analysis
  31. Trajectory Analysis, contd.
  32. Unsupervised learning motivation: the "Manifold assumption"
  33. Unsupervised learning motivation: the "Manifold assumption", contd.
  34. Deep generative models
  35. Generative Adversarial Nets (GANs) [Goodfellow et al., 2014]
  36. What spoils a GAN trainer's day: Mode Collapse
  37. Empirically detecting mode collapse: the Birthday Paradox Test
  38. Estimated support size from well-known GANs
  39. To wrap up... What to work on: suggestions for theorists
  40. Concluding thoughts
  41. Advertisements
  42. Q&A
