Overview
Explore the latest advancements in PyTorch and MLflow for scaling AI research to production in this 44-minute video presentation by Databricks. Dive into key developments, including model-parallel distributed training, model optimization, and on-device deployment, and learn about the newest libraries supporting production-scale deployment alongside MLflow. Discover how PyTorch's evolution since version 1.0 has accelerated the path from research to production. Gain insights into simplicity over complexity, community involvement, Papers with Code, common challenges in AI development, and code walkthroughs. Understand the importance of model size and compute requirements, exploring techniques such as pruning and quantization. Examine strategies for training models at scale, deploying on heterogeneous hardware, and managing large models. Delve into remote procedure calls, API overviews, and deployment at scale using TorchServe and MLflow. Stay up to date on PyTorch's latest features and domain-specific libraries, and find resources for further learning, including books and channels, to strengthen your AI research and production skills.
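To make the quantization and MLflow topics above concrete, here is a minimal, hypothetical sketch that is not taken from the talk: it applies PyTorch dynamic quantization to a small placeholder model and logs the result with MLflow. It assumes the torch and mlflow packages are installed, and the model architecture and parameter names are illustrative only.

    import torch
    import mlflow
    import mlflow.pytorch

    # Illustrative placeholder model; the talk does not define this architecture.
    model = torch.nn.Sequential(
        torch.nn.Linear(128, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, 10),
    )

    # Dynamic quantization: store Linear weights as int8 to shrink model size
    # and speed up CPU inference, one of the techniques covered in the video.
    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )

    # Track and package the quantized model with MLflow so it can be deployed later.
    with mlflow.start_run():
        mlflow.log_param("quantization", "dynamic-int8")
        mlflow.pytorch.log_model(quantized, "model")

The logged artifact can then be served through MLflow's model registry or a serving tool such as TorchServe, which is the deployment path the presentation walks through at a higher level.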
Syllabus
Introduction
Agenda
Simplicity over Complexity
Community
Papers with Code
Facebook
Challenges
Dev Acts
Code Walkthrough
PyTorch Libraries
Model Size and Compute Needs
Pruning
Quantization
Quantization API
Quantization Results
Training Models at Scale
Deploy Heterogeneous Hardware
Ad Hoc Jobs
PyTorch Elastic
Large Models
Remote Procedure Call
API Overview
Deployment at Scale
TorchServe
MLflow
PyTorch Update
Domain Libraries
Getting Educated
Books
Channels
Taught by
Databricks