Inside TensorFlow - tf.distribute.Strategy

A talk from the TensorFlow team, available on YouTube.

Contents

  1. Intro
  2. A class with multiple implementations
  3. Data parallelism
  4. Parameter servers and workers
  5. Central Storage
  6. Mirrored Variables
  7. All-reduce algorithm
  8. Ring all-reduce (see the first sketch after this list)
  9. Hierarchical all-reduce
  10. OneDevice Strategy
  11. Parallel input preprocessing: coming
  12. What changes when you switch strategies?
  13. # Training with Keras (second sketch below)
  14. # Training with Estimator
  15. Concept: Mirrored vs. per-replica values
  16. Support computations following this pattern
  17. setup
  18. loss, optimizer
  19. # Custom training loop, part 3: each replica (third sketch below)
  20. Concept: Modes
  21. all replicas
  22. outer loop
  23. Default Strategy
  24. # Average loss using the global batch size
  25. # Optimizer implementation, part 1
  26. merge_call(fn, args) is our secret weapon (fourth sketch below)
  27. # Optimizer implementation, part 2
  28. Concept: Replica vs. variable locality
  29. One standard pattern for updating state
  30. # Example: Mean metric (fifth sketch below)
  31. Questions?
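
Since ring all-reduce is the heart of the synchronization story, here is a toy, single-process simulation of the algorithm, for intuition only; it is not how MirroredStrategy actually moves tensors (that goes through collective ops such as NCCL):

```python
import numpy as np

def ring_all_reduce(tensors):
    """Toy simulation of ring all-reduce across n workers.
    Each worker's tensor is split into n chunks. Phase 1
    (reduce-scatter): n-1 steps of passing running partial sums
    around the ring. Phase 2 (all-gather): n-1 steps of passing
    the finished chunks around. Each worker sends 2*(n-1)/n of a
    tensor in total, independent of the number of workers."""
    n = len(tensors)
    chunks = [np.array_split(t.astype(float), n) for t in tensors]

    # Phase 1: reduce-scatter partial sums around the ring.
    for step in range(n - 1):
        for w in range(n):
            c = (w - step) % n          # chunk worker w forwards now
            chunks[(w + 1) % n][c] += chunks[w][c]

    # Phase 2: all-gather the completed chunks.
    for step in range(n - 1):
        for w in range(n):
            c = (w + 1 - step) % n      # finished chunk to forward
            chunks[(w + 1) % n][c] = chunks[w][c]

    return [np.concatenate(c) for c in chunks]

workers = [np.arange(8) * (i + 1) for i in range(4)]
out = ring_all_reduce(workers)
assert all(np.allclose(o, sum(workers)) for o in out)
```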
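
For the "Training with Keras" portion, the point of the talk is that switching strategies changes essentially one line. A minimal sketch assuming the current tf.distribute API; the tiny model and random data are placeholders, not the talk's example:

```python
import tensorflow as tf

# Picking a different strategy is the only line that changes
# when you move to a different device setup.
strategy = tf.distribute.MirroredStrategy()

# Variables created inside the scope become mirrored variables.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# Toy data; Keras distributes the input pipeline automatically.
xs = tf.random.normal([256, 8])
ys = tf.random.normal([256, 1])
model.fit(xs, ys, batch_size=64, epochs=2)
```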
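
The custom-training-loop items (setup; loss, optimizer; the per-replica step; the outer loop; averaging the loss by the global batch size) fit together roughly as below. This sketch uses today's names, strategy.run and tf.nn.compute_average_loss; an older recording may show earlier spellings such as experimental_run_v2:

```python
import tensorflow as tf

GLOBAL_BATCH_SIZE = 64
strategy = tf.distribute.MirroredStrategy()

# setup: create variables under the strategy's scope
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([512, 8]), tf.random.normal([512, 1]))
).batch(GLOBAL_BATCH_SIZE)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(inputs):
    def step_fn(x, y):  # runs once on each replica
        with tf.GradientTape() as tape:
            per_example_loss = tf.keras.losses.mse(y, model(x))
            # Average using the GLOBAL batch size, not the
            # per-replica batch size, so that summing gradients
            # across replicas gives the right answer.
            loss = tf.nn.compute_average_loss(
                per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss
    per_replica_loss = strategy.run(step_fn, args=inputs)
    return strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)

# outer loop: runs in cross-replica context
for batch in dist_dataset:
    print(train_step(batch).numpy())
```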
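
merge_call is the hook that lets code running on each replica jump back into cross-replica context once, with every replica's value in hand; the optimizer implementation uses it to apply a single combined update. A toy illustration of the mechanism, where the summed constant stands in for gradients:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

def step_fn(value):
    ctx = tf.distribute.get_replica_context()

    def merge_fn(strategy, per_replica_value):
        # Runs once, in cross-replica context, with the values
        # from every replica bundled together.
        return strategy.reduce(
            tf.distribute.ReduceOp.SUM, per_replica_value, axis=None)

    # Each replica pauses here; merge_fn sees all their values.
    total = ctx.merge_call(merge_fn, args=(value,))
    return total  # identical on every replica

result = strategy.run(step_fn, args=(tf.constant(1.0),))
```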
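
The Mean-metric example illustrates the standard pattern for updating state: create the variables under the strategy's scope, update them from the per-replica step, and read the aggregated result from the outer loop. A small sketch; the cross-replica aggregation works because Keras metrics declare their variables as sum-on-read:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Created under the scope: each replica updates its own copy,
    # and reads aggregate across replicas.
    mean_loss = tf.keras.metrics.Mean(name="train_loss")

def step_fn(loss):
    mean_loss.update_state(loss)  # per-replica update

strategy.run(step_fn, args=(tf.constant(2.0),))
print(mean_loss.result().numpy())  # aggregated across replicas
mean_loss.reset_state()
```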
