Implicit and Explicit Regularization in Deep Neural Networks
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Syllabus
Introduction
Why is deep learning so popular?
Why does deep learning not work?
Supervised learning
Stochastic gradient descent
Local optimization
Prediction error
What we converge to
Implicit Regularization
Stochastic Mirror Descent
Bregman Divergence
Stochastic Mirror Descent Algorithm
Conventional Neural Networks
SMD
Summary
Nonlinear models
Blessing of dimensionality
Distribution of weights
Explicit regularization
Blessings of dimensionality
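The syllabus centers on stochastic mirror descent (SMD), where each update is taken in the dual space defined by a potential ψ, and the Bregman divergence D_ψ(w, w') = ψ(w) − ψ(w') − ∇ψ(w')ᵀ(w − w') measures distance to the initialization. A minimal sketch of the algorithm on an over-parameterized linear model is below; the lp-norm potential, the toy data, and all hyperparameters are invented for illustration and are not taken from the lecture:

```python
import numpy as np

# Toy over-parameterized linear regression (more weights than samples), so
# many zero-loss solutions exist and the optimizer's implicit bias selects
# one. This setup is illustrative, not from the lecture.
rng = np.random.default_rng(0)
n, d = 20, 50
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)

def mirror(w, p):
    # Gradient of the potential psi(w) = (1/p) * sum(|w_i|^p).
    return np.sign(w) * np.abs(w) ** (p - 1)

def mirror_inv(z, p):
    # Inverse of the mirror map above.
    return np.sign(z) * np.abs(z) ** (1.0 / (p - 1))

def smd(p, lr=0.01, epochs=500, seed=1):
    """Stochastic mirror descent: step in the dual space, map back."""
    order = np.random.default_rng(seed)
    w = np.full(d, 1e-3)  # small initialization near zero
    for _ in range(epochs):
        for i in order.permutation(n):  # one random sample per step
            grad = (X[i] @ w - y[i]) * X[i]  # grad of 0.5*(x_i @ w - y_i)^2
            w = mirror_inv(mirror(w, p) - lr * grad, p)
    return w

w2 = smd(p=2)  # p = 2: the mirror map is the identity, so this is plain SGD
w3 = smd(p=3)  # a different potential selects a different solution
print(np.mean((X @ w2 - y) ** 2))  # training loss near zero
```

With p = 2 the potential is the squared Euclidean norm and SMD reduces exactly to stochastic gradient descent; other potentials drive the iterates toward different interpolating solutions, which is the sense in which the optimizer regularizes implicitly.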
Taught by
Institute for Pure & Applied Mathematics (IPAM)