YouTube videos curated by Class Central.
Classroom Contents
The Effectiveness of Nonconvex Tensor Completion - Fast Convergence and Uncertainty Quantification
- 1 Intro
- 2 Imperfect data acquisition
- 3 Statistical-computational gap
- 4 Prior art
- 5 A nonconvex least squares formulation
- 6 Gradient descent (GD) with random initialization?
- 7 A negative conjecture
- 8 Our proposal: a two-stage nonconvex algorithm
- 9 Rationale of two-stage approach
- 10 A bit more detail about initialization
- 11 Assumptions
- 12 Numerical experiments
- 13 No need for sample splitting
- 14 Key proof ideas: leave-one-out decoupling
- 15 Distributional theory
- 16 Back to estimation
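The outline mentions a nonconvex least-squares formulation and a two-stage algorithm, which in this line of work typically means a spectral initialization followed by gradient descent. Below is a minimal NumPy sketch of that idea on a hypothetical rank-1, noiseless toy instance; the dimensions, sampling rate, step size, and initialization details are illustrative assumptions, not the talk's actual algorithm or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20    # tensor dimension (toy choice)
p = 0.3   # Bernoulli sampling rate (toy choice)

# Ground-truth rank-1 tensor T* = u* (x) v* (x) w*
u_star = rng.standard_normal(n)
v_star = rng.standard_normal(n)
w_star = rng.standard_normal(n)
T_star = np.einsum('i,j,k->ijk', u_star, v_star, w_star)

# Observe each entry independently with probability p
mask = rng.random((n, n, n)) < p
T_obs = np.where(mask, T_star, 0.0)

# Stage 1 (sketch): spectral initialization from the mode-1 unfolding,
# rescaled by 1/p to compensate for unobserved entries
M = (T_obs / p).reshape(n, n * n)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
u = U[:, 0] * np.sqrt(s[0])
# Factor the reshaped top right singular vector into v and w
VW = (Vt[0] * np.sqrt(s[0])).reshape(n, n)
Uv, sv, Vwt = np.linalg.svd(VW, full_matrices=False)
v = Uv[:, 0] * np.sqrt(sv[0])
w = Vwt[0] * np.sqrt(sv[0])

def loss(u, v, w):
    """Least-squares loss over the observed entries."""
    R = mask * (np.einsum('i,j,k->ijk', u, v, w) - T_star)
    return np.sum(R ** 2)

# Stage 2 (sketch): plain gradient descent on the nonconvex loss
eta = 1e-3  # illustrative step size
loss0 = loss(u, v, w)
for _ in range(200):
    R = mask * (np.einsum('i,j,k->ijk', u, v, w) - T_star)
    gu = 2 * np.einsum('ijk,j,k->i', R, v, w)
    gv = 2 * np.einsum('ijk,i,k->j', R, u, w)
    gw = 2 * np.einsum('ijk,i,j->k', R, u, v)
    u, v, w = u - eta * gu, v - eta * gv, w - eta * gw

rel_err = (np.linalg.norm(np.einsum('i,j,k->ijk', u, v, w) - T_star)
           / np.linalg.norm(T_star))
print(loss0, loss(u, v, w), rel_err)
```

The two stages mirror the rationale in the outline: the spectral step lands the iterates in a benign region, after which vanilla gradient descent on the least-squares objective refines the estimate.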