The Effectiveness of Nonconvex Tensor Completion - Fast Convergence and Uncertainty Quantification

Institute for Pure & Applied Mathematics (IPAM) via YouTube


Classroom Contents

  1. Intro
  2. Imperfect data acquisition
  3. Statistical-computational gap
  4. Prior art
  5. A nonconvex least-squares formulation
  6. Gradient descent (GD) with random initialization?
  7. A negative conjecture
  8. Our proposal: a two-stage nonconvex algorithm
  9. Rationale of the two-stage approach
  10. A bit more detail about initialization
  11. Assumptions
  12. Numerical experiments
  13. No need for sample splitting
  14. Key proof ideas: leave-one-out decoupling
  15. Distributional theory
  16. Back to estimation
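
The two-stage approach sketched in items 5–10 (a spectral initialization followed by gradient descent on a nonconvex least-squares loss over observed entries) can be illustrated on a toy problem. The sketch below is not the talk's actual algorithm: it assumes a noiseless rank-1 tensor, Bernoulli sampling, and heuristic choices of step size and iteration count, all chosen for this small demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 15, 0.5  # tensor side length and sampling rate (demo values)

# Ground-truth rank-1 tensor T = u* ⊗ v* ⊗ w*, observed on a random mask.
u_s, v_s, w_s = (rng.standard_normal(n) for _ in range(3))
T = np.einsum('i,j,k->ijk', u_s, v_s, w_s)
mask = rng.random((n, n, n)) < p  # Bernoulli-sampled observed entries

# Stage 1: spectral initialization from the mode-1 unfolding of the
# inverse-probability-weighted observations (a stand-in for the talk's
# initialization step).
M = (T * mask / p).reshape(n, n * n)
U, S, Vt = np.linalg.svd(M, full_matrices=False)
u = U[:, 0] * np.sqrt(S[0])
# Recover v, w from a second SVD of the reshaped top right singular vector.
Vmat = (Vt[0] * np.sqrt(S[0])).reshape(n, n)
Uv, Sv, Vvt = np.linalg.svd(Vmat)
v = Uv[:, 0] * np.sqrt(Sv[0])
w = Vvt[0] * np.sqrt(Sv[0])

# Stage 2: gradient descent on the nonconvex least-squares objective
# f(u,v,w) = (1/2) * sum over observed (i,j,k) of ((u⊗v⊗w)_ijk - T_ijk)^2.
eta = 2e-3  # heuristic step size for this demo
for _ in range(1000):
    R = (np.einsum('i,j,k->ijk', u, v, w) - T) * mask  # residual on observed entries
    gu = np.einsum('ijk,j,k->i', R, v, w)
    gv = np.einsum('ijk,i,k->j', R, u, w)
    gw = np.einsum('ijk,i,j->k', R, u, v)
    u, v, w = u - eta * gu, v - eta * gv, w - eta * gw

est = np.einsum('i,j,k->ijk', u, v, w)
rel_err = np.linalg.norm(est - T) / np.linalg.norm(T)
```

With a reasonable initialization, gradient descent typically drives the relative recovery error to a small value on this toy instance; the lecture's point is that such two-stage nonconvex schemes come with provable fast convergence and distributional guarantees under suitable assumptions.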
