Overview
Explore a thought-provoking video lecture that challenges the traditional understanding of generalization in machine learning. Delve into the "double descent" risk curve, which extends the classic U-shaped bias-variance trade-off curve past the point where a model fits the training data exactly. Discover how highly overparameterized models such as deep neural networks can achieve good out-of-sample accuracy even while driving training error to nearly zero. Learn about the "interpolation threshold" and its significance for modern machine learning practice. Examine how this perspective plays out across a range of models, including neural networks and random forests, and gain insight into the mechanisms behind the phenomenon and its potential impact on the field.
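The sketch below is not from the lecture itself; it is a minimal, self-contained illustration of the double descent experiment the syllabus mentions, using random Fourier features fit by minimum-norm least squares on synthetic data. All names, parameters, and data choices (the sine target, the frequency scale, the feature counts) are illustrative assumptions.

```python
# Minimal double-descent sketch with random Fourier features (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data: a noisy sine target.
n_train, n_test, noise = 20, 200, 0.1
x_train = rng.uniform(-1, 1, n_train)
x_test = rng.uniform(-1, 1, n_test)
y_train = np.sin(2 * np.pi * x_train) + noise * rng.standard_normal(n_train)
y_test = np.sin(2 * np.pi * x_test)

def rff(x, W, b):
    # Random Fourier feature map: cos(w * x + b) for each random frequency w.
    return np.cos(np.outer(x, W) + b)

for n_features in [5, 10, 20, 40, 200, 1000]:
    W = rng.normal(0, 10, n_features)          # random frequencies
    b = rng.uniform(0, 2 * np.pi, n_features)  # random phases
    Phi_train, Phi_test = rff(x_train, W, b), rff(x_test, W, b)
    # Minimum-norm least-squares fit via the pseudoinverse; once
    # n_features >= n_train the model interpolates the training data
    # exactly -- this is the interpolation threshold.
    coef = np.linalg.pinv(Phi_train) @ y_train
    train_mse = np.mean((Phi_train @ coef - y_train) ** 2)
    test_mse = np.mean((Phi_test @ coef - y_test) ** 2)
    print(f"{n_features:5d} features  train MSE {train_mse:.4f}  test MSE {test_mse:.4f}")
```

Run as written, the test error typically rises as the feature count approaches the number of training points, peaks near the interpolation threshold (n_features close to n_train), and then falls again as the model becomes heavily overparameterized, tracing the double descent curve the lecture discusses.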
Syllabus
Introduction
Example
Overfitting
Interpolation Threshold
Random Fourier Features
Conclusion
Taught by
Yannic Kilcher