Accelerate Your GenAI Model Inference with Ray and Kubernetes
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore how to accelerate Generative AI model inference using Ray and Kubernetes in this conference talk. Delve into the challenges of serving massive GenAI models with hundreds of billions of parameters and learn practical approaches for deploying them in production environments. Discover how KubeRay runs Ray clusters on Kubernetes, leveraging hardware accelerators such as GPUs and TPUs for enhanced performance. Gain insights into Ray, an open-source framework for distributed machine learning, and its Serve library for scalable online inference. Understand how integrating Ray with accelerators creates a robust platform for serving GenAI models efficiently and cost-effectively. Acquire practical knowledge on scaling inference workloads across large clusters of machines and preparing your Kubernetes platform for cutting-edge AI applications.
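To make the Ray Serve portion concrete, here is a minimal sketch of a GPU-backed online inference deployment of the kind the talk describes. The specific model, request schema, and generation parameters below are illustrative assumptions, not details from the talk; on a KubeRay-managed cluster the same application would typically be launched through a RayService resource rather than a local serve.run call.

# Minimal Ray Serve sketch (assumes ray[serve] and transformers are installed).
# Model name ("gpt2") and the {"prompt": ...} request shape are placeholders.
from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=2, ray_actor_options={"num_gpus": 1})
class TextGenerator:
    def __init__(self):
        # Each replica loads the model onto the GPU Ray assigned to it.
        from transformers import pipeline
        self.pipe = pipeline("text-generation", model="gpt2", device=0)

    async def __call__(self, request: Request) -> str:
        # Expect a JSON body like {"prompt": "..."} and return generated text.
        prompt = (await request.json())["prompt"]
        return self.pipe(prompt, max_new_tokens=64)[0]["generated_text"]

app = TextGenerator.bind()
serve.run(app)  # Exposes an HTTP endpoint on port 8000 of the Ray cluster

Scaling out then amounts to raising num_replicas (or enabling Serve autoscaling) and letting KubeRay add GPU or TPU worker pods to the underlying Kubernetes cluster to satisfy the resource requests.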
Syllabus
Accelerate Your GenAI Model Inference with Ray and Kubernetes - Richard Liu, Google Cloud
Taught by
CNCF [Cloud Native Computing Foundation]