Neural Nets for NLP - Efficiency Tricks for Neural Nets


Graham Neubig via YouTube


Classroom Contents


  1. Intro
  2. Why are Neural Networks Slow and What Can we Do?
  3. A Simple Example • How long does a matrix-matrix multiply take?
  4. Practically
  5. Speed Trick 3
  6. Reduce # of Operations
  7. Reduce CPU-GPU Data Movement
  8. What About Memory?
  9. Three Types of Parallelism
  10. Within-operation Parallelism
  11. Operation-wise Parallelism
  12. Example-wise Parallelism
  13. Computation Across Large Vocabularies
  14. A Visual Example of the Softmax
  15. Importance Sampling (Bengio and Senecal 2003)
  16. Noise Contrastive Estimation (Mnih & Teh 2012)
  17. Mini-batch Based Negative Sampling
  18. Hard Negative Mining • Select the top n hardest examples
  19. Efficient Maximum Inner Product Search
  20. Structure-based Approximations
  21. Class-based Softmax (Goodman 2001) • Assign each word to a class
  22. Binary Code Prediction (Dietterich and Bakiri 1995, Oda et al. 2017)
  23. Two Improvements to Binary Code Prediction
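Chapter 21 above describes the class-based softmax idea: assign each word to a class, then factor the word probability into a class probability times a within-class word probability, so each softmax is small. A minimal sketch of that factorization (all sizes, weights, and the class assignment here are hypothetical toy values, not from the lecture):

```python
import math
import random

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy setup (hypothetical): vocabulary of V words split across C classes.
random.seed(0)
V, C, H = 8, 2, 4                          # vocab size, number of classes, hidden size
word_to_class = [0, 0, 0, 0, 1, 1, 1, 1]   # each word is assigned to exactly one class
W_class = [[random.gauss(0, 1) for _ in range(H)] for _ in range(C)]
W_word = [[random.gauss(0, 1) for _ in range(H)] for _ in range(V)]

def word_prob(h, word):
    """p(word | h) = p(class(word) | h) * p(word | class(word), h).
    Each softmax is over C or ~V/C items instead of all V words."""
    c = word_to_class[word]
    p_class = softmax([dot(row, h) for row in W_class])[c]
    in_class = [w for w in range(V) if word_to_class[w] == c]
    p_in_class = softmax([dot(W_word[w], h) for w in in_class])
    return p_class * p_in_class[in_class.index(word)]

h = [random.gauss(0, 1) for _ in range(H)]
probs = [word_prob(h, w) for w in range(V)]
# The factored probabilities still form a valid distribution over the vocabulary.
assert abs(sum(probs) - 1.0) < 1e-9
```

With a balanced class assignment this replaces one O(V) softmax with two of size roughly C and V/C, which is the speedup the chapter title refers to.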
