Inverse Problems under a Learned Generative Prior - Lecture 1
International Centre for Theoretical Sciences via YouTube
Overview
Syllabus
Inverse Problems under a Learned Generative Prior Lecture 1
Examples of inverse problems
A common prior: sparsity
Sparsity can be optimized via a convex relaxation (see the sparse-recovery sketch after the syllabus)
Recovery guarantee for sparse signals
Generative models learn to sample impressively from complex signal classes
How are generative models used in inverse problems?
Generative models provide SOTA performance
Deep Compressive Sensing (see the generative-prior sketch after the syllabus)
Initial theory for generative priors analyzed global minimizers, which may be hard to find
Random generative priors allow rigorous recovery guarantees
Compressive sensing with a random generative prior has favorable geometry for optimization
Proof Outline
Deterministic Condition for Recovery
Compressive sensing with a random generative prior has a provably convergent subgradient descent algorithm
Guarantees for compressive sensing under generative priors have been extended to convolutional architectures
Why can generative models outperform sparsity models?
Sparsity appears to fail in Compressive Phase Retrieval
Our formulation: Deep Phase Retrieval (see the formulation sketched after the syllabus)
Generative priors can be efficiently exploited for compressive phase retrieval
Comparison on MNIST
New workflow for scientists
Concrete steps have already been taken
Further Theory Needed
Main takeaways
Q&A
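The syllabus items on sparsity and its convex relaxation refer to the standard compressive-sensing recipe. The following is a minimal sketch of that recipe, not code from the lecture; the problem sizes, the regularization weight lam, and the solver (ISTA, i.e. proximal gradient descent on 0.5*||Ax - y||^2 + lam*||x||_1) are illustrative choices.

import numpy as np

# Sparse ground truth and Gaussian measurements y = A @ x_true (noiseless for simplicity).
rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                          # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)      # random Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true

# ISTA: proximal gradient descent on the l1-relaxed objective.
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)                  # gradient of the quadratic data-fit term
    v = x - step * grad
    x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)   # soft-thresholding (prox of l1)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))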
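The deep compressive sensing items describe replacing the sparsity prior with a generator G and searching over its latent space. Below is a hedged numpy sketch of that idea, not the lecture's implementation: G is a two-layer random-weight ReLU network (the random, expansive Gaussian setting in which the syllabus says rigorous recovery guarantees hold), measurements are y = A G(z_true), and recovery runs (sub)gradient descent on f(z) = 0.5*||A G(z) - y||^2. All names, sizes, the step size, and the iteration count are illustrative assumptions; the lecture's theory concerns when such descent provably succeeds.

import numpy as np

rng = np.random.default_rng(1)
k, d, n, m = 10, 100, 400, 80                 # latent dim, hidden width, signal dim, measurements

# Untrained random-weight ReLU generator G(z) = W2 relu(W1 z).
W1 = rng.normal(size=(d, k)) / np.sqrt(d)
W2 = rng.normal(size=(n, d)) / np.sqrt(n)
A = rng.normal(size=(m, n)) / np.sqrt(m)      # compressive Gaussian measurement matrix

def G(z):
    return W2 @ np.maximum(W1 @ z, 0.0)

z_true = rng.normal(size=k)
y = A @ G(z_true)                             # observe m << n linear measurements of G(z_true)

# (Sub)gradient descent on f(z) = 0.5 * ||A G(z) - y||^2 over the latent code z.
z = 0.1 * rng.normal(size=k)
step = 0.05
for _ in range(5000):
    h = np.maximum(W1 @ z, 0.0)
    r = A.T @ (A @ (W2 @ h) - y)              # gradient of the data-fit term w.r.t. G(z)
    grad_z = W1.T @ ((W2.T @ r) * (h > 0.0))  # chain rule through the ReLU and both layers
    z -= step * grad_z

print("latent error:", np.linalg.norm(z - z_true))
print("signal error:", np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true)))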
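For the "Our formulation: Deep Phase Retrieval" item, a plausible formulation, reconstructed from the standard setup in this literature rather than quoted from the lecture (in particular, amplitude rather than intensity measurements is an assumption here), is:

\[
y = \lvert A\,G(z_\star) \rvert \in \mathbb{R}^m, \qquad
\hat z \in \arg\min_{z \in \mathbb{R}^k} \tfrac{1}{2} \bigl\lVert \, \lvert A\,G(z) \rvert - y \, \bigr\rVert_2^2, \qquad
\hat x = G(\hat z),
\]

where the absolute value acts entrywise, G maps the low-dimensional latent space to signal space, and m may be well below n, so the prior must compensate both for the missing phases (signs, in the real case) and for the compression. This is the regime in which, per the syllabus, sparsity appears to fail while generative priors can be efficiently exploited.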
Taught by
International Centre for Theoretical Sciences