
YouTube

Fit Without Fear - An Over-Fitting Perspective on Modern Deep and Shallow Learning

MITCBMM via YouTube

Overview

Explore the intriguing world of over-parametrization in modern supervised machine learning in this 55-minute lecture by Mikhail Belkin of Ohio State University. Delve into the paradox of deep networks with millions of parameters that interpolate the training data yet perform excellently on test sets. Discover how classical kernel methods exhibit properties similar to those of deep learning and offer competitive alternatives when scaled to big data. Examine the effectiveness of stochastic gradient descent at driving training error to zero in the interpolated regime. Gain insight into the challenges of understanding deep learning and the importance of developing a fundamental grasp of "shallow" kernel classifiers in over-fitted settings. Consider the perspective that much of modern learning's success can be understood through the lens of over-parametrization and interpolation, and the crucial question of why classifiers in this "modern" interpolated setting generalize so well to unseen data.
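The interpolation phenomenon the lecture centers on can be seen in a few lines of NumPy. The sketch below is our own illustration, not code from the talk: a Laplacian-kernel machine exactly interpolates noisy training labels, yet its held-out predictions still track the clean underlying function.

```python
import numpy as np

# Illustrative sketch (not code from the lecture): a Laplacian-kernel machine
# that exactly interpolates noisy training labels yet still predicts well,
# mirroring the "fit without fear" phenomenon the talk describes.

rng = np.random.default_rng(0)

def laplacian_kernel(x, z, bandwidth=1.0):
    # K[i, j] = exp(-|x_i - z_j| / h); a non-smooth kernel of the kind
    # the lecture highlights as a strong interpolating learner
    return np.exp(-np.abs(x[:, None] - z[None, :]) / bandwidth)

# Noisy 1-D regression data
x_train = np.linspace(-3, 3, 40)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(40)

# Solving K @ alpha = y_train makes the predictor hit every training label
K = laplacian_kernel(x_train, x_train)
alpha = np.linalg.solve(K, y_train)

def predict(x_new):
    return laplacian_kernel(x_new, x_train) @ alpha

# Training error is numerically zero: the model interpolates the noise...
train_err = np.max(np.abs(predict(x_train) - y_train))

# ...yet it still tracks the clean underlying function on held-out points
x_test = np.linspace(-3, 3, 301)
test_rmse = np.sqrt(np.mean((predict(x_test) - np.sin(x_test)) ** 2))
print(f"max train error: {train_err:.2e}, test RMSE vs. sin(x): {test_rmse:.3f}")
```

The choice of a Laplacian rather than a Gaussian kernel echoes the syllabus item on the limits of smooth kernels; it also keeps the kernel matrix well conditioned, so the linear solve interpolates to machine precision.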

Syllabus

Intro
Supervised ML
Interpolation and Overfitting
Modern ML
Fit without Fear
Overfitting perspective
Kernel machines
Interpolation in kernels
Interpolated classifiers work
What is going on?
Performance of kernels
Kernel methods for big data
The limits of smooth kernels
EigenPro: practical implementation
Comparison with state-of-the-art
Improving speech intelligibility
Stochastic Gradient Descent
The Power of Interpolation
Optimality of mini-batch size 1
Mini-batch size?
Real data example
Learning kernels for parallel computation?
Theory vs practice
Model complexity of interpolation?
How to test model complexity?
Testing model complexity for kernels
Levels of noise
Theoretical analyses fall short
Simplicial interpolation A-fit
Nearly Bayes optimal
Parting Thoughts I
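The syllabus items on stochastic gradient descent, the power of interpolation, and mini-batch size 1 can be illustrated with a toy experiment (our own sketch, not the lecture's code): in an over-parametrized linear model that can interpolate, plain SGD with mini-batch size 1 drives the training loss to zero.

```python
import numpy as np

# Toy sketch (our own, not the talk's code): SGD with mini-batch size 1
# on an over-parametrized linear model reaches zero training loss.

rng = np.random.default_rng(1)

n_samples, n_features = 20, 100           # more parameters than data points
X = rng.standard_normal((n_samples, n_features)) / np.sqrt(n_features)
y = rng.standard_normal(n_samples)        # even arbitrary labels are fittable

w = np.zeros(n_features)
lr = 1.0                                  # safe here: rows have norm ~1
for epoch in range(500):
    for i in rng.permutation(n_samples):  # mini-batch of size 1
        err = X[i] @ w - y[i]
        w -= lr * err * X[i]              # SGD step on the squared loss

train_mse = np.mean((X @ w - y) ** 2)
print(f"training MSE after SGD: {train_mse:.2e}")
```

Because the model has more parameters than samples, an interpolating solution exists, and each size-1 SGD step contracts the residual on the sampled point, so the training loss converges to (numerically) zero.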

Taught by

MITCBMM
