Overview
Explore the intersection of neural networks and nonparametric regression in this lecture by Yu-Xiang Wang of UC San Diego. Delve into why overparameterized deep learning models outperform classical methods such as kernels and splines. Examine how standard hyperparameter tuning in deep neural networks (DNNs) implicitly uncovers hidden sparsity and low-dimensional structure, leading to improved adaptivity. Gain new perspectives on overparameterization, representation learning, and the generalization capabilities of neural networks through implicit biases induced by the optimization algorithm, such as Edge of Stability and Minima Stability. Analyze theory and examples illustrating how DNNs achieve adaptive, near-optimal generalization, shedding light on their effectiveness in practice.
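To make the setting concrete, below is a minimal, hypothetical sketch (not taken from the lecture) of the kind of experiment it discusses: a small ReLU network trained with weight decay on a regression task whose target depends on only one of ten input coordinates. All architecture choices and hyperparameters here are illustrative assumptions; inspecting the per-input weight norms after training gives a rough sense of how weight decay can surface hidden sparsity.

```python
# Illustrative sketch: weight decay on a 1-sparse nonparametric regression task.
import torch

torch.manual_seed(0)

# Synthetic data: y depends only on coordinate 0 of a 10-dimensional input.
n, d = 512, 10
X = torch.randn(n, d)
y = torch.sin(3.0 * X[:, :1])  # hidden one-dimensional structure

model = torch.nn.Sequential(
    torch.nn.Linear(d, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

# Weight decay is the knob the lecture highlights; 1e-2 is an arbitrary choice.
opt = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=1e-2)
loss_fn = torch.nn.MSELoss()

for step in range(5000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Column norms of the first-layer weight matrix, one per input coordinate.
# If training has picked up the low-dimensional structure, coordinate 0
# should dominate and the others should be driven toward zero.
with torch.no_grad():
    col_norms = model[0].weight.norm(dim=0)
print(col_norms)
```

This toy setup only gestures at the phenomenon; the lecture's claims concern how such sparsity-seeking behavior, combined with large learning rates, yields adaptive generalization guarantees.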
Syllabus
Neural Networks meet Nonparametric Regression: Generalization by Weight Decay and Large...
Taught by
Simons Institute