Class Central Classrooms (beta)
YouTube videos curated by Class Central.
Classroom Contents
Inverse Problems under a Learned Generative Prior - Lecture 1
- 1 Inverse Problems under a Learned Generative Prior - Lecture 1
- 2 Examples of inverse problems
- 3 A common prior: sparsity
- 4 Sparsity can be optimized via a convex relaxation
- 5 Recovery guarantee for sparse signals
- 6 Generative models learn to impressively sample from complex signal classes
- 7 How are generative models used in inverse problems?
- 8 Generative models provide SOTA performance
- 9 Deep Compressive Sensing
- 10 Initial theory for generative priors analyzed global minimizers, which may be hard to find
- 11 Random generative priors allow rigorous recovery guarantees
- 12 Compressive sensing with random generative prior has favorable geometry for optimization
- 13 Proof Outline
- 14 Deterministic Condition for Recovery
- 15 Compressive sensing with random generative prior has a provably convergent subgradient descent algorithm
- 16 Guarantees for compressive sensing under generative priors have been extended to convolutional architectures
- 17 Why can generative models outperform sparsity models?
- 18 Sparsity appears to fail in Compressive Phase Retrieval
- 19 Our formulation: Deep Phase Retrieval
- 20 Generative priors can be efficiently exploited for compressive phase retrieval
- 21 Comparison on MNIST
- 22 New workflow for scientists
- 23 Concrete steps have already been taken
- 24 Further Theory Needed
- 25 Main takeaways
- 26 Q&A
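
For readers skimming this outline, here is a minimal sketch of what item 4 ("Sparsity can be optimized via a convex relaxation") refers to: recovering a sparse signal from compressive measurements by solving the l1-regularized least-squares problem with ISTA (proximal gradient descent). This is a generic illustration, not code from the lecture; the dimensions, the regularization weight `lam`, and all variable names are arbitrary choices.

```python
# Illustrative sketch (not from the lecture): sparse recovery via the convex relaxation
#   min_x  0.5 * ||A x - y||_2^2 + lam * ||x||_1
# solved with ISTA. All sizes and lam are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                               # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)       # Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                     # compressive measurements

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2             # 1 / Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))             # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding prox
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```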
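Items 11-15 concern replacing the sparsity prior with a generative network G and recovering the latent code by (sub)gradient descent on the measurement misfit. The sketch below assumes a small expansive ReLU generator with i.i.d. Gaussian weights (loosely matching the random-prior setting of item 11); the generator `G`, the layer sizes, the step size, and the iteration count are assumptions for illustration, not the lecture's configuration.

```python
# Illustrative sketch: compressive sensing under a random generative prior.
# We recover the latent code z by (sub)gradient descent on
#   f(z) = 0.5 * ||A G(z) - y||_2^2 .
# All sizes, weights, and step sizes are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
k, n1, n2, m = 10, 50, 200, 60                     # latent dim, hidden widths, measurements
W1 = rng.standard_normal((n1, k)) / np.sqrt(n1)    # fixed random ReLU generator weights
W2 = rng.standard_normal((n2, n1)) / np.sqrt(n2)
A = rng.standard_normal((m, n2)) / np.sqrt(m)      # Gaussian measurement matrix

def G(z):
    """Two-layer random ReLU generator: G(z) = relu(W2 relu(W1 z))."""
    return np.maximum(W2 @ np.maximum(W1 @ z, 0.0), 0.0)

z_true = rng.standard_normal(k)
y = A @ G(z_true)                                  # compressive measurements of G(z_true)

z = 0.1 * rng.standard_normal(k)                   # random initialization
step = 0.02
for _ in range(5000):
    h1 = np.maximum(W1 @ z, 0.0)                   # forward pass, keeping ReLU masks
    h2 = np.maximum(W2 @ h1, 0.0)
    r = A @ h2 - y                                 # measurement residual
    d2 = (A.T @ r) * (h2 > 0)                      # backprop through the output ReLU
    d1 = (W2.T @ d2) * (h1 > 0)                    # backprop through the hidden ReLU
    z -= step * (W1.T @ d1)                        # (sub)gradient step in latent space
print("relative error:", np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true)))
```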
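Items 18-20 treat compressive phase retrieval, where only measurement magnitudes are observed. As a rough illustration of optimizing over the latent code in that setting, the sketch below runs subgradient descent on an amplitude-based least-squares objective; the lecture's "Deep Phase Retrieval" formulation may differ, and all names, sizes, and step sizes here are assumptions.

```python
# Illustrative sketch: phase retrieval from magnitude-only measurements y = |A G(z*)|
# under the same kind of random ReLU generative prior, via subgradient descent on
#   f(z) = 0.5 * || |A G(z)| - y ||_2^2 .
# Not necessarily the lecture's formulation; all quantities are assumptions.
import numpy as np

rng = np.random.default_rng(2)
k, n1, n2, m = 10, 50, 200, 120
W1 = rng.standard_normal((n1, k)) / np.sqrt(n1)
W2 = rng.standard_normal((n2, n1)) / np.sqrt(n2)
A = rng.standard_normal((m, n2)) / np.sqrt(m)

def G(z):
    return np.maximum(W2 @ np.maximum(W1 @ z, 0.0), 0.0)

z_true = rng.standard_normal(k)
y = np.abs(A @ G(z_true))                          # phaseless (magnitude-only) measurements

z = 0.1 * rng.standard_normal(k)
step = 0.02
for _ in range(5000):
    h1 = np.maximum(W1 @ z, 0.0)
    h2 = np.maximum(W2 @ h1, 0.0)
    u = A @ h2
    r = (np.abs(u) - y) * np.sign(u)               # chain rule through |.| (subgradient 0 at 0)
    d2 = (A.T @ r) * (h2 > 0)                      # backprop through the output ReLU
    d1 = (W2.T @ d2) * (h1 > 0)                    # backprop through the hidden ReLU
    z -= step * (W1.T @ d1)
print("relative error:", np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true)))
```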