Overview
Explore a conference talk on AntMan, a deep learning infrastructure deployed at Alibaba that efficiently manages and scales GPU resources for deep learning workloads. Discover how the system improves GPU utilization by dynamically scaling memory and computation from within the deep learning frameworks themselves. Learn how co-designing the cluster scheduler with the frameworks lets multiple jobs share a GPU without compromising the performance of jobs with resource guarantees. Gain insight into how AntMan handles the fluctuating resource demands of deep learning training jobs, yielding substantial improvements in the utilization of both GPU memory and GPU computation units, with implications for job performance, system throughput, and hardware utilization in large-scale deep learning environments.
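The dynamic memory scaling idea described above can be sketched as a toy allocator. This is not AntMan's actual code; `ElasticMemoryPool` and its methods are hypothetical names used only for illustration. The assumption, following the talk's description, is that a best-effort job keeps tensors on the GPU only up to a cap the scheduler can lower at any time, spilling the excess to host memory so a co-located job can grow.

```python
class ElasticMemoryPool:
    """Hypothetical allocator sketch: GPU bytes up to `cap`, host memory beyond it."""

    def __init__(self, cap_bytes):
        self.cap = cap_bytes      # scheduler-imposed GPU memory limit
        self.gpu_used = 0
        self.host_used = 0

    def allocate(self, nbytes):
        # Prefer GPU memory while under the cap; otherwise fall back to host.
        if self.gpu_used + nbytes <= self.cap:
            self.gpu_used += nbytes
            return "gpu"
        self.host_used += nbytes
        return "host"

    def shrink(self, new_cap):
        # Scheduler lowers the cap: spill cached GPU bytes to host memory
        # so another job on the same device can use the freed capacity.
        spill = max(0, self.gpu_used - new_cap)
        self.gpu_used -= spill
        self.host_used += spill
        self.cap = new_cap


pool = ElasticMemoryPool(cap_bytes=8)
print(pool.allocate(6))   # fits under the cap -> "gpu"
print(pool.allocate(4))   # would exceed the cap -> "host"
pool.shrink(2)            # cap lowered: 4 GPU bytes spill to host
print(pool.gpu_used, pool.host_used)
```

Growing back is the mirror operation: when the scheduler raises the cap again, subsequent allocations land on the GPU without restarting the job, which is the property the talk's "Memory grow-shrink" micro-benchmark evaluates.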
Syllabus
Intro
Deep Learning in production
Observations: Low utilization
Opportunities
Outline
Dynamic scaling memory
Dynamic scaling computation: Exclusive mode
AntMan architecture
Micro-benchmark: Memory grow-shrink
Micro-benchmark: Adaptive computation
Trace experiment
Large-scale experiment
Conclusion
AntMan: Dynamic Scaling on GPU Clusters for Deep Learning
Taught by
USENIX