
Overfitting: Benign, Tempered and Harmful - Lecture on Machine Learning Regularization

Institute for Pure & Applied Mathematics (IPAM) via YouTube

Overview

Explore a 51-minute conference talk by Michael Murray of the University of Bath, presented at IPAM's Analyzing High-dimensional Traces of Intelligent Behavior Workshop. Delve into a nuanced picture of overfitting in neural networks that challenges conventional wisdom about regularization and generalization. Examine the surprising phenomenon of models achieving near-zero loss on noisy training data while still performing well on test data. Discover the distinctions between benign, tempered, and harmful overfitting, and investigate how data properties such as regularity, signal strength, and the ratio of data points to dimensions determine which outcome occurs. Gain insight into the relationship between data characteristics and model performance in the context of a simple data model, and into the factors that drive transitions between overfitting regimes in high-dimensional settings.
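
As a loose illustration of the interpolation phenomenon described above (not the model analyzed in the lecture), the sketch below fits a minimum-norm linear interpolator in an overparameterized regime: it reaches essentially zero training loss on noisy labels yet still attains moderate test error. The dimensions, noise level, and signal construction are hypothetical choices for demonstration only.

```python
# Minimal sketch: minimum-norm linear regression with many more features than
# training points (d >> n). The fit interpolates the noisy labels (train MSE ~ 0)
# but can still predict reasonably on fresh data.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test, d = 50, 1000, 2000        # hypothetical sizes, d >> n_train
signal = rng.normal(size=d) / np.sqrt(d)   # ground-truth linear signal

X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
noise = 0.5 * rng.normal(size=n_train)

y_train = X_train @ signal + noise         # noisy training labels
y_test = X_test @ signal                   # clean targets for evaluation

# Minimum-norm interpolator: w = X^+ y via the Moore-Penrose pseudo-inverse
w = np.linalg.pinv(X_train) @ y_train

train_mse = np.mean((X_train @ w - y_train) ** 2)  # ~0: the noise is memorized
test_mse = np.mean((X_test @ w - y_test) ** 2)     # finite: overfitting need not be harmful

print(f"train MSE: {train_mse:.2e}, test MSE: {test_mse:.3f}")
```

How benign or harmful this kind of interpolation is depends on exactly the data properties the talk discusses, such as signal strength and the ratio of samples to dimensions.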

Syllabus

Michael Murray - Overfitting: benign, tempered and harmful - IPAM at UCLA

Taught by

Institute for Pure & Applied Mathematics (IPAM)
