Deep Decoder: Concise Image Representations from Untrained Networks - Lecture 2

International Centre for Theoretical Sciences via YouTube

Classroom Contents


  1. Deep Decoder: Concise Image Representations from Untrained Networks Lecture 2
  2. Recovering images from few data requires a model for natural images
  3. Models for Natural Images: Wavelets + Sparsity
  4. Models for Natural Images: Sparse Coding
  5. Models for natural images: neural nets trained on large datasets
  6. This talk: Untrained neural nets as a model of natural images
  7. The deep decoder
  8. Compression
  9. Image compression
  10. In contrast to deep decoder, other neural net architectures are complicated
  11. Solving inverse problems with the deep decoder
  12. Inverse problem
  13. Image recovery with models
  14. Denoising performance
  15. Deep decoder is on par with state of the art for denoising
  16. Why does the deep decoder work?
  17. Why does the deep decoder denoise so well?
  18. The deep decoder
  19. Theory: Deep Decoder can only fit so much noise
  20. Denoising rates
  21. Proof
  22. Deep image prior [Ulyanov et al., '18]
  23. Comparison to denoising with deep image prior [Ulyanov et al., '18]
  24. How can linear upsampling, ReLUs, and linear combinations synthesize images efficiently? (see the sketch after this list)
  25. Summary
  26. Q&A
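
Items 7, 11, and 24 cover the deep decoder's architecture and its use for inverse problems such as denoising. The sketch below is a minimal, illustrative PyTorch implementation of a deep-decoder-style generator, not the lecture's exact configuration: the layer width `k`, depth, normalization choice, and optimizer settings are assumptions. It composes 1x1 convolutions (linear channel combinations), bilinear upsampling, ReLUs, and channel normalization, and fits the weights of the untrained network to a single noisy image.

```python
import torch
import torch.nn as nn


class DeepDecoder(nn.Module):
    """Sketch of a deep-decoder-style generator: each layer applies a 1x1
    convolution (linear combination of channels), bilinear upsampling, a ReLU,
    and channel-wise normalization; a final 1x1 conv + sigmoid maps to RGB."""

    def __init__(self, k=64, num_layers=5, out_channels=3):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers += [
                nn.Conv2d(k, k, kernel_size=1, bias=False),  # linear channel combination
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.ReLU(inplace=True),
                nn.BatchNorm2d(k, affine=True),              # channel normalization
            ]
        layers += [nn.Conv2d(k, out_channels, kernel_size=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)


def denoise(y, k=64, num_layers=5, steps=2000, lr=0.01):
    """Fit the untrained decoder to a single noisy image y (shape 1 x C x H x W,
    with H and W divisible by 2**num_layers) by optimizing its weights."""
    _, c, h, w = y.shape
    z = torch.randn(1, k, h // 2**num_layers, w // 2**num_layers)  # fixed random input
    model = DeepDecoder(k, num_layers, c)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(z) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return model(z).detach()
```

Because the decoder has far fewer parameters than the image has pixels, it can represent the image's structure while fitting only a limited amount of noise, which is the intuition behind items 17 through 20.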
