Random Initialization and Implicit Regularization in Nonconvex Statistical Estimation - Lecture 2
Georgia Tech Research via YouTube
Syllabus
Intro
Statistical models come to the rescue
Example: low-rank matrix recovery
Solving quadratic systems of equations
A natural least squares formulation
Rationale of two-stage approach
What does prior theory say?
Exponential growth of signal strength in Stage 1
Our theory: noiseless case
Population-level state evolution
Back to finite-sample analysis
Gradient descent theory revisited
A second look at gradient descent theory
Key proof idea: leave-one-out analysis
Key proof ingredient: random-sign sequences
Automatic saddle avoidance
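The syllabus above centers on solving quadratic systems of equations (phase retrieval) by running vanilla gradient descent from a random initialization on a nonconvex least-squares loss. The sketch below illustrates that setup; it is not code from the lecture, and the dimensions, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the problem covered in the syllabus: recover x*
# from quadratic measurements y_i = (a_i^T x*)^2 by minimizing the
# nonconvex least-squares loss f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2
# with plain gradient descent started from a RANDOM initialization
# (no spectral warm start). All hyperparameters here are assumptions.

rng = np.random.default_rng(0)
n, m = 20, 400                       # signal dimension, number of equations
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)     # unit-norm ground-truth signal
A = rng.standard_normal((m, n))      # Gaussian sampling vectors a_i
y = (A @ x_star) ** 2                # phaseless quadratic measurements

def gradient(x):
    """Gradient of the least-squares loss f at x."""
    Ax = A @ x
    return (A.T @ ((Ax ** 2 - y) * Ax)) / m

x = 0.1 * rng.standard_normal(n)     # small random initialization
eta = 0.1                            # constant step size
for _ in range(500):
    x -= eta * gradient(x)

# The signal is identifiable only up to a global sign flip.
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
```

In the noiseless setting sketched here, the signal component of the iterate grows geometrically in the early stage, matching the "exponential growth of signal strength in Stage 1" topic, and the random start avoids saddle points generically.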
Taught by
Georgia Tech Research