Approximation with Deep Networks - Remi Gribonval, Inria


Alan Turing Institute via YouTube


Classroom Contents


  1. Introduction
  2. Feedforward neural networks
  3. Studying the expressivity of DNNs
  4. Example: the ReLU activation function
  5. ReLU networks (see the sketch after this list)
  6. Universal approximation property
  7. Why sparsely connected networks?
  8. Same sparsity - various network shapes
  9. Approximation with sparse networks
  10. Direct vs. inverse estimate
  11. Notion of approximation space
  12. Role of skip-connections
  13. Counting neurons vs. connections
  14. Role of the activation function
  15. The case of spline activation functions (Theorem 2)
  16. Guidelines to choose an activation?
  17. Rescaling equivalence with the ReLU
  18. Benefits of depth?
  19. Role of depth
  20. Set-theoretic picture
  21. Summary: Approximation with DNNs
  22. Overall summary & perspectives
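
For readers skimming the chapter list, here is a minimal numpy sketch of what chapters 2-5 refer to: a feedforward network with ReLU activations, instantiated with illustrative parameters (not taken from the lecture) as a one-hidden-layer network realizing the piecewise-linear "hat" function, a standard building block in ReLU approximation arguments.

```python
import numpy as np

def relu(x):
    # ReLU activation, applied coordinate-wise: max(0, x)
    return np.maximum(0.0, x)

def feedforward(x, weights, biases):
    # Forward pass of a fully connected network: ReLU after each
    # hidden layer, plain affine map on the output layer.
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)
    return weights[-1] @ a + biases[-1]

# Illustrative parameters: a width-3 hidden layer whose output layer
# combines three shifted ReLUs into the hat function
#   h(x) = 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1)
# which rises from 0 at x=0 to 1 at x=1/2 and back to 0 at x=1.
W1 = np.array([[1.0], [1.0], [1.0]])
b1 = np.array([0.0, -0.5, -1.0])
W2 = np.array([[2.0, -4.0, 2.0]])
b2 = np.array([0.0])

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    y = feedforward(np.array([x]), [W1, W2], [b1, b2])
    print(f"hat({x}) = {y[0]}")  # 0.0, 0.5, 1.0, 0.5, 0.0
```

Composing such hat functions is one way depth pays off: stacking them produces piecewise-linear functions whose number of pieces grows quickly with depth, which is the kind of phenomenon the "Benefits of depth?" and "Role of depth" chapters examine.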
