

Distributed TensorFlow Training - Google I/O 2018

TensorFlow via YouTube

Overview

Learn how to efficiently scale machine learning model training across multiple GPUs and machines using TensorFlow's distribution strategies in this 35-minute Google I/O '18 conference talk. Explore the Distribution Strategy API, which enables distributed training with minimal code changes. Discover techniques for data parallelism, synchronous and asynchronous parameter updates, and model parallelism. Follow a demonstration of setting up distributed training on Google Cloud and examine performance benchmarks for ResNet50. Gain insights into optimizing input pipelines, including parallelizing file reading and transformations, pipelining with prefetching, and using fused transformation ops. Access additional resources and performance guides to further enhance your distributed TensorFlow training skills.
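The talk's central promise is that scaling an existing Estimator job onto multiple GPUs takes only a couple of extra lines. Below is a minimal sketch of that pattern using the TF 1.x-era API current at I/O '18, where MirroredStrategy lived under tf.contrib.distribute (in TF 2.x it is tf.distribute.MirroredStrategy, typically used with a strategy.scope()). The model_fn and input_fn here are hypothetical placeholders, not code from the talk; a second sketch covering the input-pipeline optimizations follows the syllabus below.

# A sketch of the "few lines changed" pattern (TF 1.x-era API; names are
# placeholders, not code from the talk).
import numpy as np
import tensorflow as tf

def model_fn(features, labels, mode):
    # Toy classifier standing in for a real model such as ResNet50.
    logits = tf.layers.dense(features["x"], units=10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

def input_fn():
    # Synthetic data; a real job would read TFRecords (see the pipeline sketch below).
    x = np.random.rand(1024, 784).astype(np.float32)
    y = np.random.randint(0, 10, size=1024).astype(np.int32)
    return tf.data.Dataset.from_tensor_slices(({"x": x}, y)).repeat().batch(64)

# The distribution-specific change: build a strategy and pass it to RunConfig.
strategy = tf.contrib.distribute.MirroredStrategy()  # replicate across local GPUs
config = tf.estimator.RunConfig(train_distribute=strategy)

estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)
estimator.train(input_fn=input_fn, steps=1000)

Everything else, including the model function and input function, stays unchanged, which is why the talk describes this as distributed training with minimal code changes.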

Syllabus

Intro
Training can take a long time
Scaling with Distributed Training
Data parallelism
Async Parameter Server
Sync Allreduce Architecture
Ring Allreduce Architecture
Model parallelism
Distribution Strategy API: a high-level API to distribute your training
Training with the Estimator API
Training on multiple GPUs with Distribution Strategy
Mirrored Strategy
Demo Setup on Google Cloud
Performance Benchmarks
A simple input pipeline for ResNet50
Input pipeline as an ETL Process
Input pipeline bottleneck
Parallelize file reading
Parallelize map for transformations
Pipelining with prefetching
Using fused transformation ops
Work In Progress
TensorFlow Resources
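The input-pipeline chapters above map onto the tf.data API roughly as sketched below, using the TF 1.x-era names from the talk's timeframe (parallel_interleave and map_and_batch lived under tf.contrib.data then; they have since moved to tf.data.experimental or been folded into interleave and map with num_parallel_calls). The file pattern, parse_fn, and tuning constants are hypothetical placeholders.

# Extract -> Transform -> Load, with each stage parallelized or pipelined.
import tensorflow as tf

NUM_PARALLEL_READS = 8    # concurrent TFRecord files being read
NUM_PARALLEL_CALLS = 16   # concurrent parse_fn invocations
BATCH_SIZE = 64

def parse_fn(serialized_example):
    # Placeholder parser: decode one TFRecord into (image, label).
    parsed = tf.parse_single_example(
        serialized_example,
        {"image": tf.FixedLenFeature([], tf.string),
         "label": tf.FixedLenFeature([], tf.int64)})
    image = tf.image.decode_jpeg(parsed["image"], channels=3)
    image = tf.image.resize_images(image, [224, 224])
    return image, parsed["label"]

def input_fn():
    files = tf.data.Dataset.list_files("/data/train-*.tfrecord")  # hypothetical path

    # Extract: parallelize file reading by interleaving several shards at once.
    dataset = files.apply(tf.contrib.data.parallel_interleave(
        tf.data.TFRecordDataset, cycle_length=NUM_PARALLEL_READS))

    dataset = dataset.shuffle(buffer_size=10000)

    # Transform: parallelize the map...
    #   dataset = dataset.map(parse_fn, num_parallel_calls=NUM_PARALLEL_CALLS)
    #   dataset = dataset.batch(BATCH_SIZE)
    # ...or use the fused map-and-batch transformation op in one step:
    dataset = dataset.apply(tf.contrib.data.map_and_batch(
        parse_fn, BATCH_SIZE, num_parallel_batches=2))

    # Load: prefetch so the next batch is prepared on the CPU while the
    # accelerators train on the current one.
    return dataset.prefetch(buffer_size=1)

Tuning values like the cycle length and parallel-call counts depend on the machine; the talk points to the TensorFlow performance guides for choosing them.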

Taught by

TensorFlow

