Inverse Problems under a Learned Generative Prior - Lecture 1

International Centre for Theoretical Sciences via YouTube

Overview

Explore the theoretical foundations of inverse problems under learned generative priors in this 50-minute lecture by Paul Hand at the International Centre for Theoretical Sciences. Delve into examples of inverse problems, sparsity as the classical prior, and how generative models learn to sample impressively from complex signal classes. Examine how generative models are used in inverse problems, their state-of-the-art performance, and deep compressive sensing. Investigate random generative priors, their rigorous recovery guarantees, and the favorable optimization geometry they induce in compressive sensing. Learn about deterministic conditions for recovery and a provably convergent subgradient descent algorithm. Discover why generative models can outperform sparsity models, especially in compressive phase retrieval, and gain insight into the resulting workflow for scientists, the concrete steps already taken, and the further theory still needed. The lecture concludes with main takeaways and a Q&A session.
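
As background for the description above, here is the core formulation the lecture builds on, written in standard notation from the deep-compressive-sensing literature (the symbols are illustrative, not transcribed from the slides). Given a generator G from R^k to R^n with k much smaller than n, and linear measurements y = A x* with A an m-by-n matrix and m much smaller than n, one models the signal as x* = G(z*) and estimates

\[
\hat{z} \in \operatorname*{arg\,min}_{z \in \mathbb{R}^k} \; \bigl\| A\,G(z) - y \bigr\|_2^2,
\qquad \hat{x} = G(\hat{z}).
\]

By contrast, the classical sparsity prior constrains the number of nonzeros of x and relaxes this to the convex program \( \min_x \|x\|_1 \ \text{s.t.}\ Ax = y \). In the compressive phase retrieval setting discussed later, only magnitudes \( y = |A x^*| \) are observed, and the analogous objective is \( \bigl\| \, |A\,G(z)| - y \, \bigr\|_2^2 \).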

Syllabus

Inverse Problems under a Learned Generative Prior Lecture 1
Examples of inverse problems
A common prior: sparsity
Sparsity can be optimized via a convex relaxation
Recovery guarantee for sparse signals
Generative models learn to impressively sample from complex signal classes
How are generative models used in inverse problems?
Generative models provide SOTA performance
Deep Compressive Sensing
Initial theory for generative priors analyzed global minimizers, which may be hard to find
Random generative priors allow rigorous recovery guarantees
Compressive sensing with random generative prior has favorable geometry for optimization
Proof Outline
Deterministic Condition for Recovery
Compressive sensing with random generative prior has a provably convergent subgradient descent algorithm (see the runnable sketch after this syllabus)
Guarantees for compressive sensing under generative priors have been extended to convolutional architectures
Why can generative models outperform sparsity models?
Sparsity appears to fail in Compressive Phase Retrieval
Our formulation: Deep Phase Retrieval
Generative priors can be efficiently exploited for compressive phase retrieval
Comparison on MNIST
New workflow for scientists
Concrete steps have already been taken
Further Theory Needed
Main takeaways
Q&A
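
For readers who want something executable, below is a minimal, self-contained NumPy sketch (not the speaker's code) of the setting named in the subgradient-descent syllabus item: a small random-weight ReLU network serves as the generative prior, the measurements are Gaussian, and plain subgradient descent is run on f(z) = ½‖A G(z) − y‖². All dimensions, weight scalings, the step size, and the restart heuristic are illustrative assumptions, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not values from the lecture):
k, h, n, m = 10, 100, 400, 80      # latent, hidden, signal, measurement dims

# Random (untrained) expansive ReLU generator G(z) = relu(W2 relu(W1 z)).
W1 = rng.normal(size=(h, k)) / np.sqrt(h)
W2 = rng.normal(size=(n, h)) / np.sqrt(n)
# Gaussian compressive measurement matrix, m << n.
A = rng.normal(size=(m, n)) / np.sqrt(m)

relu = lambda t: np.maximum(t, 0.0)

def G(z):
    return relu(W2 @ relu(W1 @ z))

z_true = rng.normal(size=k)
y = A @ G(z_true)                  # noiseless measurements of x* = G(z*)

def subgradient(z):
    """A subgradient of f(z) = 0.5 * ||A G(z) - y||^2; the 0/1 ReLU
    masks give a valid subgradient at the kinks."""
    a1 = W1 @ z
    a2 = W2 @ relu(a1)
    r = A.T @ (A @ relu(a2) - y)   # df/dx at x = G(z)
    r = W2.T @ ((a2 > 0) * r)      # back through the outer ReLU and W2
    return W1.T @ ((a1 > 0) * r)   # back through the inner ReLU and W1

def descend(z0, step=0.02, iters=4000):
    z = z0.copy()
    for _ in range(iters):
        z -= step * subgradient(z)
    return z

# The landscape can have a spurious basin near a negative multiple of the
# true code, so take a few random restarts and keep the best data fit.
best = min((descend(rng.normal(size=k)) for _ in range(3)),
           key=lambda z: np.linalg.norm(A @ G(z) - y))

print("relative recovery error:",
      np.linalg.norm(G(best) - G(z_true)) / np.linalg.norm(G(z_true)))
```

With expansive layers and enough Gaussian measurements, the theory covered in the lecture predicts a benign optimization geometry, which is why this plain first-order method can succeed here despite the nonconvex objective.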

Taught by

International Centre for Theoretical Sciences
