Overview
Discover how to get easy and affordable access to GPUs for AI/ML workloads in this 40-minute conference talk from Google Cloud Next 2024. Learn about the Dynamic Workload Scheduler (DWS) and its practical applications, as well as the Compute Engine consumption models it sits alongside: on-demand, Spot, and future reservations. Gain insight into meeting the rapidly growing demand for GPU capacity driven by AI/ML training, fine-tuning, and inference workloads. Presented by Ari Liberman and Laura Ionita, the session offers practical guidance for teams facing GPU scarcity in their AI/ML projects.
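To make the consumption models mentioned above concrete, the following is a minimal sketch (not material from the talk) of requesting a Spot GPU VM with the google-cloud-compute Python client. The project, zone, machine type, and accelerator values are placeholder assumptions chosen for illustration only.

    # Sketch: create a Spot (preemptible, lower-cost) GPU VM on Compute Engine.
    # Values such as machine type and accelerator type are illustrative assumptions.
    from google.cloud import compute_v1

    def create_spot_gpu_vm(project: str, zone: str, name: str) -> None:
        instance = compute_v1.Instance(
            name=name,
            machine_type=f"zones/{zone}/machineTypes/n1-standard-8",
            # One NVIDIA T4 as an example; any supported accelerator type works.
            guest_accelerators=[
                compute_v1.AcceleratorConfig(
                    accelerator_type=f"zones/{zone}/acceleratorTypes/nvidia-tesla-t4",
                    accelerator_count=1,
                )
            ],
            # "SPOT" selects the Spot consumption model; GPU VMs must terminate
            # (rather than live-migrate) on host maintenance.
            scheduling=compute_v1.Scheduling(
                provisioning_model="SPOT",
                on_host_maintenance="TERMINATE",
            ),
            disks=[
                compute_v1.AttachedDisk(
                    boot=True,
                    auto_delete=True,
                    initialize_params=compute_v1.AttachedDiskInitializeParams(
                        source_image="projects/debian-cloud/global/images/family/debian-12",
                    ),
                )
            ],
            network_interfaces=[
                compute_v1.NetworkInterface(network="global/networks/default")
            ],
        )
        operation = compute_v1.InstancesClient().insert(
            project=project, zone=zone, instance_resource=instance
        )
        operation.result()  # Block until the insert operation completes.

On-demand capacity uses the same request without the Spot scheduling settings, while future reservations and DWS queue or reserve capacity ahead of time; the talk itself covers how to choose among these options.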
Syllabus
How to get easy and affordable access to GPUs for AI/ML workloads
Taught by
Google Cloud Tech