Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Toward Theoretical Understanding of Deep Learning - Lecture 2

International Centre for Theoretical Sciences via YouTube

Overview

Explore the theoretical foundations of deep learning in this comprehensive lecture by Sanjeev Arora from Princeton University and the Institute for Advanced Study. Delve into the mathematics behind machine learning, focusing on supervised and unsupervised learning techniques. Examine the challenges of overparameterization, optimization, and generalization in deep neural networks. Investigate landscape analysis, trajectory analysis, and the manifold assumption in unsupervised learning. Learn about deep generative models, including Generative Adversarial Networks (GANs), and their associated challenges like mode collapse. Gain insights into cutting-edge research directions and potential areas for theoretical exploration in the field of deep learning.
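The lecture treats training via gradient descent as deep learning's "natural algorithm". As a toy illustration only (not taken from the lecture), here is a minimal NumPy sketch of gradient descent on a linear least-squares landscape; the data, step size, and iteration count are arbitrary choices for the sketch:

```python
import numpy as np

# Toy illustration (not from the lecture): gradient descent on the
# least-squares loss f(w) = (1/n) * ||Xw - y||^2. Data and step size
# below are arbitrary choices for this sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true

w = np.zeros(5)
lr = 0.1  # small enough for this well-conditioned quadratic
for _ in range(2000):
    grad = (2 / len(y)) * X.T @ (X @ w - y)  # exact gradient of f
    w -= lr * grad

print(np.allclose(w, w_true, atol=1e-4))  # → True: GD recovers w_true
```

On a convex quadratic like this, gradient descent provably converges for a small enough step size; the lecture's point is that deep networks present a far less well-behaved, "unknown" landscape where such guarantees are the open question.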

Syllabus

Date & Time: Tuesday, 12 February,
Date & Time: Wednesday, 13 February,
Start
Toward theoretical understanding of deep learning
Machine learning (ML): A new kind of science
Recap:
Training via Gradient Descent: a "natural algorithm"
Subcase: deep learning*
Brief history: networks of "artificial neurons"
Some questions
Part 1: Why overparameterization and/or overprovisioning?
Overprovisioning may help optimization part 1: a folklore experiment
Overprovisioning can help part 2: Allowing more
Acceleration effect of increasing depth
But textbooks warn us: Larger models can "Overfit"
Popular belief/conjecture
Noise stability: understanding one layer (no nonlinearity)
Proof sketch: Noise stability ⇒ deep net can be made low-dimensional
The Quantitative Bound
Correlation with Generalization (qualitative check)
Concluding thoughts on generalization
Part 2: Optimization in deep learning
Basic concepts
Curse of dimensionality
Gradient descent in unknown landscape
Gradient descent in unknown landscape, contd.
Evading saddle points
Active area: Landscape Analysis
New trend: Trajectory Analysis
Trajectory Analysis, contd.
Unsupervised learning motivation: "Manifold assumption"
Unsupervised learning motivation: "Manifold assumption", contd.
Deep generative models
Generative Adversarial Nets (GANs) [Goodfellow et al. 2014]
What spoils a GAN trainer's day: Mode Collapse
Empirically detecting mode collapse: Birthday Paradox Test
Estimated support size from well-known GANs
To wrap up... What to work on: suggestions for theorists
Concluding thoughts
Advertisements
Q&A
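The Birthday Paradox Test named in the syllabus estimates the support size of a GAN's output distribution from how often duplicates appear in a sampled batch: if a batch of s samples contains a near-duplicate about half the time, the support holds roughly s² distinct modes. Below is a hedged sketch of that counting logic only, with a simulated uniform "generator" standing in for a real GAN; the function names and the support size 10,000 are illustrative assumptions, not the lecture's code:

```python
import math
import random

# Sketch of the Birthday Paradox Test's counting logic (illustrative setup,
# not the lecture's code). If a generator were uniform over N distinct modes,
# a batch of s samples contains a duplicate with probability approximately
# p(s, N) = 1 - exp(-s*(s-1) / (2N))  (birthday approximation).
# The test runs this in reverse: find the batch size s at which duplicates
# appear about half the time, then solve p = 1/2 for N.

def collision_probability(s: int, n_support: int) -> float:
    """Approximate P(at least one duplicate among s uniform draws)."""
    return 1.0 - math.exp(-s * (s - 1) / (2.0 * n_support))

def estimate_support(batch_size_at_half: int) -> int:
    """Duplicates at p ≈ 1/2 imply support ≈ s^2 / (2 ln 2)."""
    return round(batch_size_at_half**2 / (2.0 * math.log(2)))

def has_duplicate(s: int, n_support: int, rng: random.Random) -> bool:
    """One simulated batch from a uniform 'generator' with known support."""
    batch = [rng.randrange(n_support) for _ in range(s)]
    return len(set(batch)) < s

# Empirical check against a known support of 10,000:
rng = random.Random(0)
s = 118  # batch size where p(s, 10_000) ≈ 1/2
hits = sum(has_duplicate(s, 10_000, rng) for _ in range(2000))
print(collision_probability(s, 10_000))  # ≈ 0.5
print(estimate_support(s))               # ≈ 10,000, close to the true support
print(hits / 2000)                       # empirically near 0.5
```

In the actual test, "duplicate" means a visually near-identical pair found by human inspection of the closest images in the batch, since two GAN samples are never bit-identical; the combinatorics above are unchanged.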

Taught by

International Centre for Theoretical Sciences
