TensorFlow Model Optimization - Quantization and Pruning

TensorFlow via YouTube

Reducing memory (11 of 48)

Class Central Classrooms (beta)

YouTube videos curated by Class Central.

Classroom Contents

  1. Introduction
  2. Why is this important
  3. Benefits of optimization
  4. Resource constrained environment
  5. Application constrained environment
  6. Machine learning opportunities
  7. Machine learning efficiency
  8. Matrix multiply
  9. Goals for optimization
  10. Reducing precision
  11. Reducing memory
  12. Reducing bandwidth pressure
  13. Reduce precision
  14. Linear mapping (see the mapping sketch after this list)
  15. The problem
  16. The implications
  17. Quantization is complicated
  18. It's hard to interpret
  19. The model is not enough
  20. Quantization types
  21. Quantization benefits
  22. Quantization tools
  23. Post-training
  24. TensorFlow Lite converter (see the post-training quantization sketch after this list)
  25. Quantization types
  26. Hybrid quantization
  27. Accuracy
  28. Integer quantization
  29. Results
  30. Quantization training (see the quantization-aware training sketch after this list)
  31. Quantization model
  32. Hybrid quantization
  33. Integer quantization
  34. Training graph
  35. Summary
  36. Neural connection pruning
  37. Stencil pruning
  38. Tensor pruning
  39. TensorFlow pruning API (see the pruning sketch after this list)
  40. Pruning schedule
  41. Benefits of pruning
  42. Roadmap
  43. Better target hardware
  44. Feedback
  45. Tools
  46. Questions
  47. Training with integer constellations
  48. Question
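
The "Reduce precision" and "Linear mapping" chapters describe how floating-point values are mapped onto a small integer range. As a rough illustration (not taken from the video, and using made-up weight values), the snippet below applies the standard affine mapping r ≈ scale * (q - zero_point) to quantize and dequantize a few floats into int8:

```python
# Tiny linear (affine) quantization example with placeholder values.
import numpy as np

weights = np.array([-1.0, -0.5, 0.0, 0.75, 1.5], dtype=np.float32)
qmin, qmax = -128, 127  # int8 range

# Scale and zero point map the float range onto the integer range.
scale = (weights.max() - weights.min()) / (qmax - qmin)
zero_point = int(round(qmin - weights.min() / scale))

# Quantize: q = round(r / scale) + zero_point, clipped to int8.
q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax).astype(np.int8)

# Dequantize: r ≈ scale * (q - zero_point).
dequantized = scale * (q.astype(np.float32) - zero_point)
print(q, dequantized)
```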
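
Chapters 22 through 29 cover post-training quantization with the TensorFlow Lite converter. As a hedged sketch (the Keras model, input shape, and random representative dataset below are placeholders, not taken from the talk), this is the typical converter flow for hybrid (dynamic-range) and full integer quantization:

```python
# Post-training quantization sketch with the TensorFlow Lite converter.
import numpy as np
import tensorflow as tf

# Placeholder model; any trained Keras model would be used here.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Hybrid (dynamic-range) quantization: weights are stored as int8,
# activations stay in float at runtime.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
hybrid_tflite = converter.convert()

# Integer quantization: a representative dataset lets the converter
# calibrate activation ranges so ops can run in int8.
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 28, 28).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
int8_tflite = converter.convert()
```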
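
Chapters 30 through 35 cover quantization-aware training. A minimal sketch, assuming the tensorflow_model_optimization (tfmot) package and a placeholder Keras model, of wrapping a model so quantization is simulated in the training graph and then converting the result:

```python
# Quantization-aware training sketch with tensorflow_model_optimization.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder model; architecture and data are illustrative only.
base_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Wrap the model so fake-quantization ops are inserted into the training
# graph; the model then learns weights that tolerate reduced precision.
qat_model = tfmot.quantization.keras.quantize_model(base_model)
qat_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# qat_model.fit(train_images, train_labels, epochs=1)  # placeholder data

# Convert as usual; the learned quantization parameters carry over.
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite = converter.convert()
```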
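
Chapters 36 through 41 cover the TensorFlow pruning API and pruning schedules. A minimal sketch, with illustrative sparsity targets and step counts (not values from the talk), of magnitude-based pruning driven by a polynomial-decay schedule:

```python
# Weight pruning sketch with the TensorFlow Model Optimization pruning API.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder model; architecture and data are illustrative only.
base_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# The schedule ramps sparsity from 0% to 80% over the first 1000 steps.
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0,
    final_sparsity=0.8,
    begin_step=0,
    end_step=1000,
)
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    base_model, pruning_schedule=pruning_schedule
)
pruned_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# UpdatePruningStep applies the schedule as training progresses.
callbacks = [tfmot.sparsity.keras.UpdatePruningStep()]
# pruned_model.fit(train_images, train_labels, epochs=2, callbacks=callbacks)

# strip_pruning removes the wrappers, leaving a sparse model that
# compresses well and can be converted to TensorFlow Lite.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```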
