Explore strategies for detecting and overcoming GPU failures during machine learning training in this 43-minute conference talk by Ganeshkumar Ashokavardhanan of Microsoft and Sarah Belghiti of Wayve. Delve into the challenges GPU failures pose for ML training, particularly distributed training, as model sizes and training scales grow. Discover the spectrum of GPU issues and learn why even minor performance degradations can significantly impact large jobs. Gain insights into using observability tools such as NVIDIA DCGM for proactive problem detection through GPU health checks. Understand the principles of fault-tolerant distributed training that mitigate the impact of GPU failures. Drawing on the experiences of a cloud provider and an autonomous vehicle company, learn best practices for efficiently identifying, remediating, and preventing GPU failures. Explore cutting-edge ideas such as CRIU (Checkpoint/Restore in Userspace) and task preemption for GPU workloads to enhance training resilience and efficiency.
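At its core, the fault-tolerant training idea mentioned above is a checkpoint-and-retry loop: on a GPU failure, the job rolls back to the last saved state instead of starting over. The sketch below illustrates only that principle; every name in it is hypothetical (it is not from the talk, and it uses no real GPU or DCGM API, with a plain `RuntimeError` standing in for a GPU/NCCL error).

```python
def run_with_retries(train_step, load_checkpoint, save_checkpoint,
                     total_steps, max_restarts=3):
    """Run training steps, resuming from the last checkpoint on failure.

    All callables are caller-supplied stand-ins: `train_step(step)` does one
    unit of work, `load_checkpoint()` returns the last saved step, and
    `save_checkpoint(step)` persists progress.
    """
    restarts = 0
    step = load_checkpoint()
    while step < total_steps:
        try:
            train_step(step)
            step += 1
            save_checkpoint(step)          # persist progress after each step
        except RuntimeError:               # stand-in for a GPU/NCCL failure
            restarts += 1
            if restarts > max_restarts:
                raise                      # give up after repeated failures
            step = load_checkpoint()       # roll back to last good state
    return step, restarts
```

Real systems layer much more on top of this (distributed checkpointing, worker re-scheduling, elastic world sizes), but the recovery loop is the common skeleton.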
Syllabus
Detecting & Overcoming GPU Failures During ML Training - Ganeshkumar Ashokavardhanan & Sarah Belghiti
Taught by
Linux Foundation