The Interpolation Phase Transition in Neural Networks - Memorization and Generalization Under Lazy Training

Simons Institute via YouTube

Overview

Explore a seminar on the interpolation phase transition in neural networks, focusing on memorization and generalization under lazy training. Delve into the phenomenon of overparameterization in modern neural networks, where networks can interpolate training sets even with random labels while still achieving good prediction error on unseen data. Examine this concept in the context of two-layer neural networks in the neural tangent regime, considering a simple data model with isotropic feature vectors in d dimensions and N hidden neurons. Investigate the conditions under which networks can exactly interpolate data and the characterization of generalization error for min-norm interpolants with linear target functions. Learn about the empirical neural tangent kernel's minimum eigenvalue and how networks approximately perform ridge regression in raw features with self-induced regularization. Gain insights from Andrea Montanari's research at Stanford University, presented at the Simons Institute's Probability, Geometry, and Computation in High Dimensions seminar series.
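As a rough illustration of the setup described above (a sketch for orientation, not code from the seminar), the snippet below builds the empirical neural tangent kernel of a two-layer ReLU network at random initialization, using n isotropic Gaussian feature vectors in d dimensions and N hidden neurons, and reports its minimum eigenvalue. The specific values of n, d, N and the choice of ReLU activation are assumptions made for the demo.

```python
import numpy as np

# Illustrative sketch only: empirical NTK of a two-layer network
#   f(x) = (1/sqrt(N)) * sum_i a_i * relu(<w_i, x>)
# at random initialization, with isotropic features in d dimensions.

rng = np.random.default_rng(0)
n, d, N = 200, 50, 400                         # samples, input dim, hidden neurons

X = rng.standard_normal((n, d))                # isotropic feature vectors
W = rng.standard_normal((N, d)) / np.sqrt(d)   # first-layer weights at init
a = rng.choice([-1.0, 1.0], size=N)            # second-layer signs; a_i^2 = 1,
                                               # so they drop out of the kernel below

# Gradient of f w.r.t. w_i at sample x: (1/sqrt(N)) * a_i * 1{<w_i,x> > 0} * x,
# so the kernel entry is K[j,k] = (1/N) * <x_j, x_k> * sum_i 1{j active} 1{k active}.
act = (X @ W.T > 0).astype(float)              # ReLU derivatives, shape (n, N)
K = (X @ X.T) * (act @ act.T) / N              # empirical NTK, shape (n, n)

lam_min = np.linalg.eigvalsh(K)[0]             # eigvalsh returns ascending eigenvalues
print(f"min eigenvalue of empirical NTK: {lam_min:.4f}")
# A minimum eigenvalue bounded away from zero is what lets gradient descent in
# the lazy regime interpolate the training labels exactly, the phenomenon the
# seminar characterizes.
```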

Syllabus

The Interpolation Phase Transition in Neural Networks: Memorization and Generalization Under Lazy Training

Taught by

Simons Institute
