The Effectiveness of Nonconvex Tensor Completion - Fast Convergence and Uncertainty Quantification
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Overview
A lecture on nonconvex optimization for low-rank tensor completion. Starting from imperfect data acquisition and the statistical-computational gap, it develops a nonconvex least-squares formulation, proposes a two-stage algorithm (initialization followed by gradient descent), presents numerical experiments and leave-one-out proof techniques, and closes with a distributional theory enabling uncertainty quantification.
Syllabus
Intro
Imperfect data acquisition
Statistical-computational gap
Prior art
A nonconvex least-squares formulation (see the illustrative sketch after this syllabus)
Gradient descent (GD) with random initialization?
A negative conjecture
Our proposal: a two-stage nonconvex algorithm
Rationale of the two-stage approach
A bit more detail about initialization
Assumptions
Numerical experiments
No need for sample splitting
Key proof ideas: leave-one-out decoupling
Distributional theory
Back to estimation
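
The syllabus centers on a nonconvex least-squares formulation minimized by gradient descent. As a rough illustration of that generic technique (not the speaker's actual algorithm), the following is a minimal NumPy sketch assuming an order-3 tensor with a rank-r CP factorization; the function names, the plain random initialization standing in for the talk's dedicated initialization stage, and all hyperparameters are hypothetical.

import numpy as np

def cp_reconstruct(U, V, W):
    # Rebuild X[i,j,k] = sum_l U[i,l] * V[j,l] * W[k,l] from CP factors.
    return np.einsum("il,jl,kl->ijk", U, V, W)

def complete_tensor(T_obs, mask, rank, steps=2000, lr=0.01, seed=0):
    # Gradient descent on the nonconvex least-squares objective
    # f(U,V,W) = sum over observed (i,j,k) of (X[i,j,k] - T[i,j,k])^2.
    rng = np.random.default_rng(seed)
    n1, n2, n3 = T_obs.shape
    # Plain random initialization (illustrative only; the talk argues for a
    # careful initialization stage before running gradient descent).
    U = rng.standard_normal((n1, rank)) / np.sqrt(n1)
    V = rng.standard_normal((n2, rank)) / np.sqrt(n2)
    W = rng.standard_normal((n3, rank)) / np.sqrt(n3)
    for _ in range(steps):
        R = (cp_reconstruct(U, V, W) - T_obs) * mask  # residual on observed set
        # Partial gradients of f with respect to each factor matrix.
        gU = 2 * np.einsum("ijk,jl,kl->il", R, V, W)
        gV = 2 * np.einsum("ijk,il,kl->jl", R, U, W)
        gW = 2 * np.einsum("ijk,il,jl->kl", R, U, V)
        U, V, W = U - lr * gU, V - lr * gV, W - lr * gW
    return U, V, W

# Toy usage: attempt to recover a random rank-2 tensor from 40% of its entries.
rng = np.random.default_rng(1)
n, r = 15, 2
U0, V0, W0 = rng.standard_normal((3, n, r))
T = np.einsum("il,jl,kl->ijk", U0, V0, W0)
mask = rng.random(T.shape) < 0.4
Uh, Vh, Wh = complete_tensor(T * mask, mask, rank=r)
print(np.linalg.norm(cp_reconstruct(Uh, Vh, Wh) - T) / np.linalg.norm(T))
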
Taught by
Institute for Pure & Applied Mathematics (IPAM)