# Training on multiple GPUs with Distribution Strategy
Class Central Classrooms
YouTube videos curated by Class Central.
Classroom Contents
Distributed TensorFlow Training - Google I/O 2018
- 1 Intro
- 2 Training can take a long time
- 3 Scaling with Distributed Training
- 4 Data parallelism
- 5 Async Parameter Server
- 6 Sync Allreduce Architecture
- 7 Ring Allreduce Architecture
- 8 Model parallelism
- 9 Distribution Strategy API: a high-level API to distribute your training (see the first sketch after this list)
- 10 Training with Estimator API
- 11 Training on multiple GPUs with Distribution Strategy
- 12 Mirrored Strategy
- 13 Demo Setup on Google Cloud
- 14 Performance Benchmarks
- 15 A simple input pipeline for ResNet50 (see the second sketch after this list)
- 16 Input pipeline as an ETL Process
- 17 Input pipeline bottleneck
- 18 Parallelize file reading
- 19 Parallelize map transformations
- 20 Pipelining with prefetching
- 21 Using fused transformation ops
- 22 Work In Progress
- 23 TensorFlow Resources
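
To make the Distribution Strategy chapters (9 through 14) concrete, here is a minimal sketch of the multi-GPU pattern the talk demonstrates: wiring MirroredStrategy into the Estimator API. It assumes a late TF 1.x release (e.g. 1.14/1.15, where `tf.distribute.MirroredStrategy` and `tf.layers` coexist); the model and input functions are hypothetical placeholders, not the talk's actual demo code.

```python
import tensorflow as tf

def model_fn(features, labels, mode):
    # Hypothetical model: a single dense layer standing in for ResNet50.
    logits = tf.layers.dense(features, 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

def input_fn():
    # Hypothetical in-memory data; a real job would read files (see next sketch).
    features = tf.random.uniform([1024, 32])
    labels = tf.random.uniform([1024], maxval=10, dtype=tf.int64)
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(64).repeat()

# MirroredStrategy replicates the model on every local GPU and keeps the
# replicas in sync with an all-reduce of the gradients each step.
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)
estimator.train(input_fn, steps=100)
```

The change the talk emphasizes is passing the strategy through `RunConfig(train_distribute=...)`; the model and input code stay untouched.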
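
The input pipeline chapters (15 through 21) follow the ETL framing: extract from files, transform in parallel, and load onto the accelerator with prefetching. Below is a hedged sketch of that pipeline; the file pattern and the `parse_fn` feature spec are invented placeholders.

```python
import tensorflow as tf

def parse_fn(serialized_example):
    # Hypothetical feature spec; a real ResNet50 pipeline decodes and augments JPEGs.
    features = tf.io.parse_single_example(
        serialized_example,
        {"image": tf.io.FixedLenFeature([224 * 224 * 3], tf.float32),
         "label": tf.io.FixedLenFeature([], tf.int64)})
    image = tf.reshape(features["image"], [224, 224, 3])
    return image, features["label"]

# Extract: read several TFRecord shards in parallel instead of sequentially.
files = tf.data.Dataset.list_files("/path/to/train-*.tfrecord")  # placeholder path
dataset = files.interleave(
    tf.data.TFRecordDataset,
    cycle_length=8,
    num_parallel_calls=tf.data.experimental.AUTOTUNE)

# Transform: parse many elements at once by parallelizing the map.
dataset = dataset.map(parse_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.shuffle(10000).batch(128)

# Load: prefetch so CPU preprocessing overlaps with accelerator compute.
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)

# At the time of the talk, fused ops such as tf.data.experimental.map_and_batch
# combined adjacent stages by hand; newer tf.data releases apply these fusions
# as automatic static optimizations.
```

Each stage removes a different bottleneck: parallel interleave hides file I/O latency, parallel map uses idle CPU cores, and prefetch pipelines the whole thing so the GPU never waits on input.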