Overview
Explore how machine learning workloads run on specialized AI hardware using Docker in this 35-minute talk by AWS Developer Advocate Shashank Prasanna. Trace the evolution of specialized processors, from early coprocessors to modern GPUs and AI accelerators such as AWS Inferentia and Intel Habana Gaudi. See how Docker containers adapt to heterogeneous systems with multiple processor types while preserving scalability and their core benefits. Gain insights into the future of machine learning workloads across diverse AI silicon, including GPUs, TPUs, and emerging technologies, and learn about the crucial role containers play in managing these complex, multi-processor environments for efficient machine learning deployments.
Syllabus
How does Docker run machine learning on specialized AI hardware
Taught by
Docker