
The Power of Nonconvex Optimization in Solving Random Quadratic Systems of Equations - Lecture 1

Georgia Tech Research via YouTube

Overview

Explore the power of nonconvex optimization in solving random quadratic systems of equations in this lecture from the TRIAD Distinguished Lecture Series, delivered by Princeton University's Yuxin Chen. Delve into the effectiveness of convex relaxation techniques and their computational limitations, then discover an alternative approach based on nonconvex programs and learn how these algorithms return correct solutions in linear time under certain conditions. Examine the extension of this theory to noisy systems and understand how the algorithms achieve minimax-optimal statistical accuracy. Gain insights into the computational efficiency of this method compared to solving a traditional least-squares problem of the same size. Follow the lecture's progression from introduction to practical applications, covering topics such as phase retrieval in imaging science, latent variable models, and learning neural networks with quadratic activation. Explore concepts including low-rank factorization, maximum likelihood estimation, spectral initialization, and iterative refinement, concluding with an analysis of performance guarantees and stability under noisy data.
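To make the two-stage recipe concrete, here is a minimal real-valued sketch of solving a random quadratic system y_i = (a_i^T x)^2 by spectral initialization followed by gradient-style iterative refinement. This illustrates the plain Wirtinger-flow idea rather than the truncated (TWF) variant covered in the lecture, and all function names, step sizes, and problem dimensions below are illustrative choices, not the lecturer's implementation.

```python
import numpy as np

def solve_quadratic_system(A, y, iters=500, step=0.1):
    """Recover x (up to global sign) from y_i = (a_i^T x)^2.

    Stage 1: spectral initialization -- leading eigenvector of
    (1/m) * sum_i y_i a_i a_i^T, rescaled by the average measurement energy.
    Stage 2: gradient descent on the nonconvex least-squares loss
    f(z) = (1/4m) * sum_i ((a_i^T z)^2 - y_i)^2.
    """
    m, n = A.shape
    # Spectral initialization.
    Y = (A.T * y) @ A / m
    _, V = np.linalg.eigh(Y)                 # eigenvectors, ascending order
    z = V[:, -1] * np.sqrt(np.mean(y))       # scale to match ||x||
    # Iterative refinement: each gradient evaluation is two matrix-vector
    # products, so the per-iteration cost is linear in the data size m*n.
    for _ in range(iters):
        Az = A @ z
        grad = A.T @ ((Az**2 - y) * Az) / m
        z -= (step / np.mean(y)) * grad      # step scaled by ~||x||^2
    return z

rng = np.random.default_rng(0)
n, m = 50, 600                               # m = 12n, as in the syllabus setting
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x) ** 2                             # noiseless quadratic measurements
z = solve_quadratic_system(A, y)
# x is only identifiable up to sign, so compare against both +/- x.
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
```

With Gaussian sensing vectors and this many measurements, the spectral estimate lands close enough to the truth that the subsequent descent converges to x (up to sign), matching the "correct solutions in linear time" behavior the overview describes.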

Syllabus

Intro
Nonconvex optimization may be super scary
Example: solving quadratic programs is hard
Example of convex surrogate: low-rank matrix completion
Example of lifting: Max-Cut
Solving quadratic systems of equations
Motivation: a missing phase problem in imaging science
Motivation: latent variable models
Motivation: learning neural nets with quadratic activation
An equivalent view: low-rank factorization
Prior art (before our work)
A first impulse: maximum likelihood estimate
Interpretation of spectral initialization
Empirical performance of initialization (m = 12n)
Improving initialization
Iterative refinement stage: search directions
Performance guarantees of TWF (noiseless data)
Computational complexity
Numerical surprise
Stability under noisy data

Taught by

Georgia Tech Research

