Deep Networks can avoid the curse of dimensionality for compositional functions
Classroom Contents
One of Three Theoretical Puzzles - Generalization in Deep Networks
- 1 Intro
- 2 Deep Networks can avoid the curse of dimensionality for compositional functions
- 3 Minimize classification error → minimize surrogate function
- 4 Motivation: generalization bounds for regression
- 5 GD: unconstrained optimization as a gradient dynamical system
- 6 Example: Lagrange multiplier
- 7 Explicit norm constraint gives weight normalization
- 8 Overparametrized networks fit the data and generalize
- 9 Gradient descent for deep ReLU networks (see the sketch after this list)
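
As a companion to chapter 9, here is a minimal sketch, not taken from the lecture itself, of plain full-batch gradient descent on a two-layer ReLU network for a toy regression problem. The data, the hidden width of 32, and the learning rate of 0.05 are all illustrative assumptions.

```python
# Minimal sketch: full-batch gradient descent on a two-layer ReLU network.
# Everything here (data, width, learning rate) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 200 points in R^2 with scalar targets.
X = rng.standard_normal((200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Two-layer network: hidden ReLU layer of width 32, linear output.
W1 = rng.standard_normal((2, 32)) * 0.1
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)) * 0.1
b2 = np.zeros(1)

lr = 0.05
for step in range(1001):
    # Forward pass.
    h_pre = X @ W1 + b1          # pre-activations
    h = np.maximum(h_pre, 0.0)   # ReLU
    pred = (h @ W2 + b2).ravel()

    # Mean-squared loss.
    err = pred - y
    loss = np.mean(err ** 2)

    # Backward pass (chain rule through the ReLU gate).
    g_pred = (2.0 / len(y)) * err[:, None]   # dL/dpred
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_h_pre = g_h * (h_pre > 0)              # ReLU derivative
    gW1 = X.T @ g_h_pre
    gb1 = g_h_pre.sum(axis=0)

    # Full-batch gradient descent update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

    if step % 200 == 0:
        print(f"step {step:4d}  loss {loss:.4f}")
```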